
For those running several nodes, what's your approach for static IP addresses?

DyslexicZombei

New member
I'm looking into some options for scaling Masternodes.

Let's say, for example, I needed 30 static IP addresses for 30 nodes. For those of you who have faced this problem, how did you deal with it?

I'm looking at a mix of EC2 and may opt for Amazon VPC if I can figure out how to manage the static IP address problem for scaling up nodes with VPC. As we all know, IPv4 addresses are almost gone, and getting 30 static IPv4 IPs likely isn't easy for one person. I'm also worried about whether Elastic IPs can be counted on to persist through server restarts, so I'm curious to see how some of you approached this dilemma.

I wish I could just NAT all the nodes behind a gateway so all of those nodes could just use one IP addy.
 
If you are spending close to $300,000 on nodes, finding 30 IPv4 addresses should be trivial. You have hundreds of options at your disposal.

However, to me, only one makes sense: spread that out!!! You get 5 EIPs with Amazon, so put 5 nodes there. Next go to Linode and put 5 more there. DigitalOcean next, 5 more nodes. Azure can handle your next 5, Rackspace the next 5, and the final 5 can go to Google Compute Engine.

Or just get 6 amazon accounts.

Or, probably your best option, get a single dedicated server from a quality provider that can handle your IP requirement.

I recommend OVH, this server specifically: http://www.ovh.com/us/dedicated-servers/enterprise/2014-SP-64.xml It would be roughly $110 a month (plus $90 upfront for the 30 IPs, but that is a one-time cost, NOT monthly). Then I would split it into 30 virtual machines, each with 1 core and 2 GB RAM (all for < $4 per node!) and route an IP to each. This is probably going to be your cheapest and easiest option to manage, given everything is on one server. There are downsides to this: for example, if you fuck up the configuration and, say, leave the private key in your logs somewhere and a hacker does get in, that might be 30,000 DRK gone as opposed to 1,000 DRK, which is the prime benefit of spreading them out significantly.
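To give an idea of the "route an IP to each" part: with OVH failover IPs and bridged VMs, the guest usually gets a /32 address routed via the host's gateway. Something like this inside each VM, assuming Debian-style networking (all addresses are placeholders; check OVH's docs for the exact scheme):

Code:
# /etc/network/interfaces inside one VM -- addresses are placeholders
auto eth0
iface eth0 inet static
    address 198.51.100.20                       # failover IP assigned to this VM
    netmask 255.255.255.255                     # /32, no on-link subnet
    post-up route add 203.0.113.254 dev eth0    # host's gateway, reached on-link
    post-up route add default gw 203.0.113.254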

OVH has great DDoS protection as well, and would handle an attack better than Amazon.
 
Thank you! This is definitely the type of feedback I was looking for. I hadn't even considered OVH, so I appreciate the lead. I really love your OVH server suggestion as you obviously considered vCPU and memory requirements. I had considered the other hosting options you discussed but I'm actually planning to set up more than 30 nodes, so that would start getting unwieldy for me. :)

BTW, with cold storage the 30K DRK probably won't be at risk but I suppose the node could still be disabled and re-purposed as someone else's node if they somehow subverted all the security.

Another question: have you ever tried moving an Amazon image to a different provider? Just wondering if that's problematic or if you have to set up some sort of 3rd party or app solution to move an Amazon image.
 
It depends on where you're trying to move it. If you are moving it to physical hardware where you can set up the virtualization environment, then yes, you can export the image itself: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ExportingEC2Instances.html

If you wanted to move it to a different cloud provider, unless you could import the instance from a vSphere/etc file, I'm not really sure how it would work.
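For what it's worth, the export in that doc is driven by a single API call. A sketch with made-up instance ID and bucket names, in aws CLI syntax (the older ec2-api-tools expose the same task):

Code:
aws ec2 create-instance-export-task \
    --instance-id i-0123456789abcdef0 \
    --target-environment vmware \
    --export-to-s3-task DiskImageFormat=VMDK,ContainerFormat=ova,S3Bucket=my-export-bucket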


Now that I actually look at your username and realize what *your* purposes are, none of the above really applies.
If I were you, I would just have a Chef/Puppet/pick-your-tool script set up to launch, configure, and install your software, and be done with it. With that, setting up a server would be as easy as running one command, though it may be a little tricky to get the masternode itself configured; I've never tried to do something like that. But getting the environment you want set up, and the security set up the way you want it, can be done in a single command once you have it scripted!
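As a rough sketch of what such a script would have to template: only a couple of lines in darkcoin.conf are actually per-node, the rest can be identical everywhere (option names as in the stock daemon; all values here are placeholders):

Code:
# darkcoin.conf -- per-node values the script fills in
externalip=198.51.100.20
masternodeprivkey=PER_NODE_PRIVKEY_HERE
rpcpassword=PER_NODE_RANDOM_PASSWORD

# identical on every node
masternode=1
daemon=1
listen=1
rpcuser=darkcoinrpc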
 

Thanks again for the response. Please post a DRK address so I can send a tip.

I actually don't have a lot of free DRK at the moment, having most of it tied up in our first node, but I would like to send you DRK from my GPU for the useful responses. I'm glad to be a community conduit to help set up and manage these nodes, but it became a bit of "be careful what you wish for," as I've fortunately been inundated with full node reservations.

Yes, scripts are beautiful when done right, and can be a PITA to deal with when not working right (cron jobs with something like AIDE, which is what I was dealing with at work today, are always fun to decipher, sigh, what with the dearth of actionable info on error codes). I actually think your suggestion will work for now while I work on a more permanent solution. Ideally, it'd be scripted to the point where it was secure and ready for you to put in the node details that are custom to that node.

It's those darn individual static IPv4 addresses that concern me when dealing with a bunch of nodes, but you've given me useful info I can act on, so thank you. I usually design, install, and/or administer LANs, wireless field LANs, and local/remote servers of various types, but scaling all of this virtually into the cloud is actually a bit new to me.

Perhaps if I paid someone to script this, like you mentioned...
 
Yes, it'd all be images on virtual hardware without having physical access to the DC. No way I could host servers here, although I did briefly think about hosting the first node on an old laptop or netbook to get started. I may just need to get a base image going at OVH if I'm going to go that route.

My electricity rate is 50 cents a kWh, so I personally only mine DRK with a 750 Ti GPU and scrypt coins with two Zeusminer Blizzards (about 140-150 W for all of 'em). Not too concerned about short-term ROI with those coins: if you think about it, someone who mined BTC for almost free, or just a few cents, on a CPU in 2009-2010 would've been rewarded with the BTC equivalent of thousands of dollars per hour (sometimes more) if they were prescient enough to have kept it.

My wife didn't want me to do one of those no-money-down solar PPAs for 20-25 years, but with the way solar costs are falling, the longer you wait the more bang you get for the buck, as long as I get it installed before the Fed credit deadline in 2016.
 
Happy to help!
XdFjCpPgjscvs42Fgn1dZPxeygmnenZLa7

Error codes with no documentation are the absolute worst. If the software were open source, it would be easy enough to just search the source for the error code, and you'd be halfway there. But it's never the open source software that has the issue.
 
Hi, have you thought about using IPv6? It just occurred to me that this would make things much easier, and looking at the makefile it seems to be supported.
And if all daemons run on the same machine, it would be possible to keep the traffic down by forcing all but 2 or 3 of them to peer with the local daemons (with the addnode= option; see the sketch below the list). A setup with multiple users running daemons on one machine, versus 30 virtual machines, has multiple advantages:
  • imagine you need to upgrade the client
  • updating packages to the latest version is a hassle across that many virtual machines which are supposed to be identical
  • you waste a lot of resources, because 200 MB of RAM per virtual machine is reasonable for just the OS
  • hard disk performance in VirtualBox machines is not that awesome (at least use NFS mounts instead), and Linux tends to do constant random disk accesses (logging, swapping, etc.); with 30 machines doing that, the host load will explode
  • monitoring could be easier with the VMs, though
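Roughly, I mean something like this in each daemon's darkcoin.conf (addresses made up; whether masternodes tolerate this is exactly the open question):

Code:
# darkcoin.conf for one of several daemons on the same host
bind=198.51.100.21           # this daemon's own address
externalip=198.51.100.21
addnode=198.51.100.22        # sibling daemons on the same box
addnode=198.51.100.23
maxconnections=12            # keep external traffic modest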
Cheers
 
chaeplin, have you considered IPv6 yet?
 
Hi, my root server arrived, I am doing my first tests with it, and I am playing around with the port=XXX parameter in darkcoin.conf. Using IPv6 would be a bit overkill if it is possible to run several darkcoin daemons simply listening on multiple ports. Would that cause problems with masternodes? It would make life much easier, because binding to several IP addresses is costly at best.
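For reference, what I am testing looks roughly like this per daemon, each with its own datadir (all values are placeholders):

Code:
# second daemon's darkcoin.conf -- started with: darkcoind -datadir=/home/user/.darkcoin-node2
port=9998
rpcport=10998
listen=1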
Thanks for any answers
 
+1, wondering the same. I currently don't have the time to test IPv6. Could anyone try playing around with IPv6 subnets on a single machine? I've never seen any masternode using IPv6 yet...
 
I will test that today when I am not "supposed to work" anymore. I have high hopes for a setup with just X configs for X daemons on X ports, where a single user and a single IPv4 address would be enough to make it work, because that would be even easier than IPv6 subnets or X separate IP addresses...
 
Don't bother, you won't be able to register a masternode that way.

Hot nodes will just crash and a hot/cold setup will give you

CDarkSendPool::RegisterAsMasterNode() - Invalid port
 
I don't think it's intended to run on a different port ;)
Damn it :) I haven't had any luck with IPv6, and haven't really tried. I was thinking the port mapping should be perfectly fine. Can anyone point me to the piece of code where this is triggered? Or is it in the closed-source part?
 
I don't get the port mapping thing anyway. You can just use bind=*:1234 in darkcoin.conf for what those scripts seem to achieve ;)

It's in the closed source. But it's obviously not supposed to run on another port, as IDA reveals:

Code:
cmp ax, 270Fh                   ; port = 9999?
jz loc_44D167                   ; go ahead with starting the node
mov [esp+458h+var_458], offset aCdarksendpoolR   ; "CDarkSendPool::RegisterAsMasterNode() - Invalid port"
call sub_428480                 ; print message
...
call exit

If it's 270Fh = 9999 it proceeds; if it's not 9999, it prints "blah Invalid port" and exits.
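In source terms, the check boils down to something like this (my reconstruction from the listing above; names are guesses, not the actual closed-source code):

Code:
// reconstructed from the disassembly, not the real source
if (port != 9999) {
    printf("CDarkSendPool::RegisterAsMasterNode() - Invalid port");
    exit(0);   // the exit code isn't visible in the snippet
}
// otherwise go ahead with starting the node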
 