PoSvc Implementation Discussion...

coingun

Well-known member
Masternode Owner/Operator
Nice to see the cow moo'ing again!!! Especially after all the discussion.

16:16 <@gitcow> [ darkcoin | master | Evan Duffield | 2 days, 10 hours, 5 minutes ago ] bb7672f Complete implementation of Proof-of-Service (not debugged)

https://github.com/darkcoin/darkcoin/commit/bb7672fb0f6d5d57475b250f633bd2cebe6c3e10

So what is PoSvc? Well, it checks:

1.) That masternodes have their ports open
2.) That they are responding to requests made by the network (rough sketch below)
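
Not the actual darkcoin code (the commit above isn't even debugged yet), but here's a rough sketch in Python of what that kind of check could look like. The port number and timeout are my assumptions, and a real check would speak the p2p protocol rather than just connecting:

```python
import socket

MN_PORT = 9999  # assumed default masternode p2p port

def check_masternode(ip, port=MN_PORT, timeout=5):
    """PoSvc-style liveness probe (sketch): is the advertised port open
    and accepting connections within the timeout? A real implementation
    would go further and exchange version/verack messages to confirm the
    node actually responds to network requests (check 2 above)."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. check_masternode("203.0.113.7") -> True if the port answers
```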

Interesting future expansion: the goal is to be able to prove nodes have many qualities, such as a specific CPU speed, bandwidth, and dedicated storage. E.g. we could require a full node to be a computer running at 2GHz with 10GB of space.
This makes me very excited to think about the future of the Masternode network: a time when this PoSvc code could detect a masternode offering VPN services, storage services, really anything that can live on top of a VPS.

It also got me thinking. For this type of future expansion, shouldn't we start thinking about how we could solve the need for remote IPs? I was almost envisioning turning our publicly published remote IP into an nginx reverse proxy that could proxy all kinds of "external" requests to the internal network of MNs. Sort of like how, when given a subnet, you are given a routing address that all your IPs need to leave from.

Now this is a far from finished thought, but could that give us the ability to have perhaps two, at most, unique IPs per masternode cluster? Perhaps a better way would be to let each MN cluster publish two (or four) IPs for load balancing or DDoS protection. Then you could use something like pfsync to keep traffic in sync and load balance.
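
To make the reverse-proxy half of that concrete: nginx's stream proxying does this out of the box, but here's the same idea sketched in bare Python so you can see there's no magic. The internal addresses and port mapping are invented for illustration; a real setup would add health checks and the pfsync-style failover mentioned above:

```python
import socket
import threading

# Invented example mapping: one public entry port per internal masternode.
BACKENDS = {
    10001: ("10.0.0.11", 9999),  # MN 1 on the private network
    10002: ("10.0.0.12", 9999),  # MN 2
}

def pipe(src, dst):
    # Shovel bytes one way until either side closes the connection.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        src.close()
        dst.close()

def serve(listen_port, backend):
    # Listen on the single public IP and forward each connection inward.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen()
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection(backend)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

for port, backend in BACKENDS.items():
    threading.Thread(target=serve, args=(port, backend), daemon=True).start()
threading.Event().wait()  # keep the proxy alive
```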

Any other networking guys care to take it from here, or point out the total misunderstanding as to why this wouldn't work?
 
It's all Greek to me. Maybe you can start by explaining some of the terminology in your post, like "an nginx reverse proxy", "pfsync"... How about rewriting this and explaining it in n00bs' language, easier for noobs to understand... lol
 
Ditto.

Anyone care to explain this in layman's terms? In practice, what does it entail?
 
Let me have a go at it. Let's take a user that has 10 masternodes as an example. In current-day terms, what does that look like? Let's assume we aren't using start-many. This means you have 10 local wallets and 10 remote wallets. You also have 10 remote-side VPSs (all probably with 512MB of RAM) and 10 external IPs.

This is all done so that your masternodes can provide both InstantX and DS+ to the network. What I'm trying to envision is a way to make it easier and scale better when we have many different nodes providing many different types of services. Looking forward, how could we scale this technology? We are running out of IPs, and we need to be prepared for IPv6, when IPs will be replaced by subnets. In IPv6 there are enough addresses that each of today's computers can have an entire subnet. So, going along with that, it got me thinking: why couldn't we let masternode owners publish to the network a list of, say, up to 4 public IPs that will route traffic to their "masternode network"? This would mean that all 10 masternodes could sit behind these new masternode-type routers, which would route traffic for the nodes behind them. There are other reasons this could be helpful too, such as DDoS protection.

The reason I'm saying two or four IPs is that it should be enough for you to essentially have masternode routers acting as a cluster. When I mentioned pfsync last time: that's a technology used in high-availability routers that essentially lets them pass traffic back and forth when one router needs to reboot, etc. An nginx reverse proxy is another way of getting this kind of high-availability experience, but it's used for websites. Maybe a combination of the two.

So what would this mean and how would it look? Let's take a user like oblox. We all know oblox has 100 masternodes. So now, instead of having to have 100 public IPs for those masternodes, oblox could have just 4 public IPs. Those IPs would all be answered by a proxy that forwarded each request on to the specific masternode. To go a step further, oblox's masternode routers could also announce to the network what types of services are offered behind them and what resources are available to be consumed by the network.
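
None of this exists in the protocol today, so purely as a sketch of the shape it might take: a made-up announcement format for such a masternode router, plus trivial load balancing across its advertised entry IPs. Every field and address here is invented for illustration:

```python
import zlib

# Hypothetical broadcast from a "masternode router": a few public entry
# IPs plus what is reachable behind them.
announcement = {
    "router_ips": ["198.51.100.1", "198.51.100.2",
                   "198.51.100.3", "198.51.100.4"],
    "masternodes": 100,                    # MNs behind the routers
    "services": ["instantx", "darksend"],  # services offered behind them
}

def pick_entry(announcement, request_id):
    """Spread requests over the advertised entry IPs with a stable hash,
    so 4 public IPs can share the load for all 100 masternodes."""
    ips = announcement["router_ips"]
    return ips[zlib.crc32(request_id.encode()) % len(ips)]

# e.g. pick_entry(announcement, "tx-1234") -> one of the four entry IPs
```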

I guess this would introduce a third type of node, a masternode router, but it might be worth the extra overhead. Of course, if you didn't have enough MNs to need this, you could just publish your 1-4 IPs or something.

Let me know where you get lost and I will try and fill some gaps...
 
I'm a bit of a fish out of water here, but that seems to open a world of possibilities if I got it right!!

:grin:

Oh Evan... I know Dash must be making you sleepless... but when-oh-when will you answer my extremely anxiously awaited email... ? :grin::grin::grin:
 
Correct, it has a lot of potential. I have tried unsuccessfully to bring this up for discussion over the last few days, but there's too much naming and logo talk. I guess I now realize that anything technical is over the heads of the people on BCT.

I'm trying to figure out how we can scale the networks that masternode operators are running, and while having proper routers in place would suuuuuure help fix a bunch of stuff, it would add overhead. But it might be the lesser of two evils.

Too bad my other suggestion, about keeping the second coin and using it as a merge-mined coin to bring extra liquidity at no extra hashing expense, also got buried. I was almost envisioning an entire relaunch of the second coin that could be used as a huuuge PR campaign and could be relaunched without a premine in a super open fashion. Almost like a stock split?!?!?!

Never mind, I'm off the topic of names and logos... fml
 
Brilliant! Very secure!

Multiple types of service.
 
goddamn my low IT skills for not letting me fully grasp the implications, but I do see the potential!
 
And I sent in a paper for review where this development (goddamn my low IT skills, I can't be 100% certain) would mean this implementation knocks it out of the park!!!

[attached image]
This is sort of what got me thinking about it. Since there are a lot of devs around that run DCs, racks full of gear, and routing, I started to think: why don't we just handle the masternodes' public IP space the same way datacenters handle routing of subnets? Subnets get assigned and are given a route they must leave through. We could do the exact same thing.

I used the analogy of an iceberg before, in that IX and DS+ are the first two pieces of the iceberg visible above the water, but there can be so much more down under there. We just don't want to overwhelm the user, so we show them...

Just the tip...
 
It's a natural evolution, but at the same time a "funneling" of sorts, in regard to a concerted attempt to subvert the network?

'cos then, one huge cluster could effectively be taken down, or used to empower the network, or to perform some sort of 51% attack. The door opens both ways, right?

Let's imagine 52% of the MN network propagating in precise coordination?
 
When I hear 'Proof of Service' it makes me think of a generic system that generates revenue for MN owners for hosting whatever future services lots of innovative Dash developers come up with, through some trendy API. Does it do that??? :)
 
That is specifically what we could create: the app store of services for devs to build on.

Finally someone is thinking outside of the box. Love it!
 
Pity 90% of the hashrate is centralised via 3 pools.

Why not shift blockchain security to the Masternodes using a similar Proof of Service model? It would be literally thousands of times more secure than the current useless arrangement of miners.
Can you elaborate on how that could work into this discussion? To me, the Proof of Service model currently being discussed is one that attempts to solve the problem of keeping the masternode list consolidated, as well as knowing what services could in the future be available inside of the masternode object.
 
Everyone, beware, coingun is a good fudster, a spreader of false, unaccounted-for data.

He could be Mr Spread!

:tongue::tongue::tongue::tongue::tongue::tongue::tongue:
 
Would you be my Mrs. Spread?
 
Here's my 2 cents:
Coming from an IT/networking background, I see your ideas involving nginx and preparing for the future possibility of IPv6 prevalence as creative and doable. Months ago Evan talked about having MNs route traffic between other MNs to create a network similar in goals to TOR: anonymizing traffic, avoiding censorship, etc. It seems that these types of networking ideas have been discussed previously to an extent. I remember Evan announcing via Twitter that the core dev team had added a member with a huge amount of networking experience, so I would reach out to that guy/gal about it.
 
I find all this very exciting :cool: and very incomprehensible :confused:.

But FWIW, it looks to me like the intent of Evan's addition of this code is to specifically address the problem with BTC that he noted in a recent post pointing to this link: http://www.reddit.com/r/Bitcoin/comments/2yvy6b/a_regulatory_compliance_service_is_sybil/

I am probably misunderstanding, but it sounds like the offending nodes are simply listening on the network and not actually contributing. Isn't this precisely what this code forbids?

That said, I really do like what I understand of the ideas being shared here. :smile:
 