IPv6 addresses

crowning

Well-known Member
May 29, 2014
1,414
1,997
183
Alpha Centauri Bc
Yep! It works on testnet.

So, in summary: IPv4 not required for anything really. No tunneling, none of that.

1. Just have separate IPv6 addresses set up correctly on your host.
2. Bind to a certain IPv6 address for each dashd process, e.g.:

Code:
bind=[address1]:9999
...and in a different config file for a different process:

Code:
bind=[address2]:9999
... and that's it. Set up your local masternode.conf correctly, ensure the port is open for IPv6 on your firewall and you should be good. At least, it worked for me on testnet.
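To make step 2 concrete, here is a minimal sketch of a two-instance layout (the addresses, paths and `-datadir` usage are illustrative examples, not taken from this thread):

```shell
# Hypothetical layout: two dashd processes on one host, each bound to
# its own IPv6 address. Each instance gets its own data directory.

# ~/.dash-mn1/dash.conf:
#   bind=[2001:db8::1]:9999

# ~/.dash-mn2/dash.conf:
#   bind=[2001:db8::2]:9999

# Each daemon is then started against its own datadir, e.g.:
#   dashd -datadir=$HOME/.dash-mn1 -daemon
#   dashd -datadir=$HOME/.dash-mn2 -daemon
```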
One question: I normally start my Masternodes from home (with IPv4-address:9999 in the configuration file) with a cold wallet.
My ISP doesn't yet offer IPv6, so I most probably can't simply put an IPv6 address into the local configuration file. How do you start the Masternode(s)?

What I will try once I've moved the first Masternodes to an IPv6 server (which also has one single IPv4 address) is to activate them via IPv4:9999, IPv4:9998, IPv4:9997 and so on and pray that they connect to the network via the IPv6 address configured there.

Anyone tried this already?
 

UdjinM6

Official Dash Dev
Dash Core Group
May 20, 2014
3,639
3,537
1,183
One question: I normally start my Masternodes from home (with IPv4-address:9999 in the configuration file) with a cold wallet.
My ISP doesn't yet offer IPv6, so I most probably can't simply put an IPv6 address into the local configuration file. How do you start the Masternode(s)?

What I will try once I've moved the first Masternodes to an IPv6 server (which also has one single IPv4 address) is to activate them via IPv4:9999, IPv4:9998, IPv4:9997 and so on and pray that they connect to the network via the IPv6 address configured there.

Anyone tried this already?
You can run mainnet masternode only on port 9999.

EDIT: and testnet on any port but 9999
 

nmarley

Administrator
Dash Core Group
Jun 28, 2014
369
427
133
One question: I normally start my Masternodes from home (with IPv4-address:9999 in the configuration file) with a cold wallet.
My ISP doesn't yet offer IPv6, so I most probably can't simply put an IPv6 address into the local configuration file. How do you start the Masternode(s)?
I think it actually sends the command over the MN network to start it, right? I might be completely wrong, but my pure guess is that you could still put an IPv6 address into the local MN config file, and even though your ISP doesn't support it, it wouldn't matter, b/c it would send the "mn_start" command over the wire (via IPv4), but then the MN network would ensure that the start command got to your IPv6 server via IPv6.

Please correct me if my understanding is off on this part, which it very well may be.
 

crowning

Well-known Member
I think it actually sends the command over the MN network to start it, right? I might be completely wrong, but my pure guess is that you could still put an IPv6 address into the local MN config file, and even though your ISP doesn't support it, it wouldn't matter, b/c it would send the "mn_start" command over the wire (via IPv4), but then the MN network would ensure that the start command got to your IPv6 server via IPv6.

Please correct me if my understanding is off on this part, which it very well may be.
I'll get a server with a /64 IPv6 subnet in the near future and will test this.

And if it doesn't work, I'll investigate why it doesn't work and implement what you suggested above.

And make UdjinM6 happy by breaking lots of things on my way :D
 

UdjinM6

Official Dash Dev
Dash Core Group
I'll get a server with a /64 IPv6 subnet in the near future and will test this.

And if it doesn't work, I'll investigate why it doesn't work and implement what you suggested above.

And make UdjinM6 happy by breaking lots of things on my way :D
:tongue:
 

Solarminer

Well-known Member
Apr 4, 2015
762
922
163
One question: I normally start my Masternodes from home (with IPv4-address:9999 in the configuration file) with a cold wallet.
My ISP doesn't yet offer IPv6, so I most probably can't simply put an IPv6 address into the local configuration file. How do you start the Masternode(s)?

What I will try once I've moved the first Masternodes to an IPv6 server (which also has one single IPv4 address) is to activate them via IPv4:9999, IPv4:9998, IPv4:9997 and so on and pray that they connect to the network via the IPv6 address configured there.

Anyone tried this already?
#1. You don't need IPv6 at home to start a remote server. Just put the IPv6 address in with the right format: [xxxx:xxxx::xxxx:xxxx]:9999.
#2. Like UdjinM6 said - all WAN ports need to be 9999.
#3. If you have multiple IPv4s on the same server, forward the WAN 9999 and 9998 ports to different ports on the machine on the primary IP. Specify the port, rpcport, and bind address in your dash.conf. Iptables is your friend.
#4. If you want to forward IPv6 you need Ubuntu 15. But you don't need to forward ports; you can start nodes one at a time without any others running. Once they are started you can run them at the same time and they will hot-start themselves. Otherwise, try your luck at forwarding with ip6tables.
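Point #3 is the fiddly one; an iptables-restore style fragment for it might look roughly like this (all addresses and ports are made up for illustration, not from this thread):

```shell
# Sketch: two public IPv4s (192.0.2.1 primary, 192.0.2.2 secondary).
# WAN port 9999 arriving on the secondary IP is DNAT'ed to a second
# dashd instance listening on port 19999 of the primary IP.
*nat
-A PREROUTING -d 192.0.2.2/32 -p tcp --dport 9999 -j DNAT --to-destination 192.0.2.1:19999
COMMIT

# The second instance's dash.conf would then carry something like:
#   port=19999
#   bind=192.0.2.1:19999
```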
 

crowning

Well-known Member
First tests don't look good.
I'm (still) using dash.conf only, but that shouldn't matter:

dash.conf of Masternode:
bind=[xxxx:xxx::xx:xxxx::xxx]:9999
listen=1
server=1
daemon=1
externalip=xxxx:xxx:xx:xxxx::xxx
masternode=1
masternodeprivkey=<mykey>

and debug.log says:
Bound to [xxxx:xxx:xx:xxxx::xxx]:9999
AddLocal([xxxx:xxx:xx:xxxx::xxx]:9999,4)

Looks good on the hot side.

dash.conf of local wallet:
masternode=1
masternodeprivkey=<mykey>
masternodeaddr=[xxxx:xxx::xx:xxxx::xxx]:9999

But all I get is "Not capable masternode: Could not connect to [xxxx:xxx::xx:xxxx::xxx]:9999"

I'll try the same in debug-mode to get the real reason.
If I didn't miss anything obvious it looks like starting an IPv6-Masternode from an IPv4-wallet does not work.

Edit:
Yep, it indeed tries to connect via IPv6, and since my ISP is IPv4 only it fails:
Code:
2016-03-22 22:40:52 CActiveMasternode::ManageStatus() - Checking inbound connection to '[xxxx:xxx:xx:xxxx::xxx]:9999'
2016-03-22 22:40:52 connect() to [xxxx:xxx:xx:xxxx::xxx]:9999 failed: Network is unreachable (101)
2016-03-22 22:40:52 CActiveMasternode::ManageStatus() - not capable: Could not connect to [xxxx:xxx:xx:xxxx::xxx]:9999
I think it's time to build an IPv6 tunnel next weekend...
 

Solarminer

Well-known Member
First tests don't look good.
I'm (still) using dash.conf only, but that shouldn't matter:

dash.conf of Masternode:
bind=[xxxx:xxx::xx:xxxx::xxx]:9999
listen=1
server=1
daemon=1
externalip=xxxx:xxx:xx:xxxx::xxx
masternode=1
masternodeprivkey=<mykey>

and debug.log says:
Bound to [xxxx:xxx:xx:xxxx::xxx]:9999
AddLocal([xxxx:xxx:xx:xxxx::xxx]:9999,4)

Looks good on the hot side.

dash.conf of local wallet:
masternode=1
masternodeprivkey=<mykey>
masternodeaddr=[xxxx:xxx::xx:xxxx::xxx]:9999

But all I get is "Not capable masternode: Could not connect to [xxxx:xxx::xx:xxxx::xxx]:9999"

I'll try the same in debug-mode to get the real reason.
If I didn't miss anything obvious it looks like starting an IPv6-Masternode from an IPv4-wallet does not work.

Edit:
Yep, it indeed tries to connect via IPv6, and since my ISP is IPv4 only it fails:
Code:
2016-03-22 22:40:52 CActiveMasternode::ManageStatus() - Checking inbound connection to '[xxxx:xxx:xx:xxxx::xxx]:9999'
2016-03-22 22:40:52 connect() to [xxxx:xxx:xx:xxxx::xxx]:9999 failed: Network is unreachable (101)
2016-03-22 22:40:52 CActiveMasternode::ManageStatus() - not capable: Could not connect to [xxxx:xxx:xx:xxxx::xxx]:9999
I think it's time to build an IPv6 tunnel next weekend...
I doubt it is your IPv4-only connection. Let's try to fix your config.

Try server=0 to disable the RPC commands. Otherwise make sure both 9999 and 9998 are open. You can't have the same ports with multiple IPv4 nodes - with IPv6 you can get away with it by starting one at a time.
rpcport=9998
port=9999
You need brackets around your external IP. [xxxx:xxx::xx:xxxx::xxx]
I have the 9999 on the external IP instead of bind IP - like this:
bindip=[xxx:xxx::xxxx:xxxx]
externalip=[xxxx:xxx::xxxx:xxxx]:9999

Also I only use the masternode.conf setup. I thought there was a problem with the dash.conf files and starting nodes, but maybe it was just that they didn't allow donate addresses....which doesn't matter now anyway.
 

nmarley

Administrator
Dash Core Group
First tests don't look good.
I'm (still) using dash.conf only, but that shouldn't matter:

dash.conf of Masternode:
bind=[xxxx:xxx::xx:xxxx::xxx]:9999
listen=1
server=1
daemon=1
externalip=xxxx:xxx:xx:xxxx::xxx
masternode=1
masternodeprivkey=<mykey>

and debug.log says:
Bound to [xxxx:xxx:xx:xxxx::xxx]:9999
AddLocal([xxxx:xxx:xx:xxxx::xxx]:9999,4)

Looks good on the hot side.

dash.conf of local wallet:
masternode=1
masternodeprivkey=<mykey>
masternodeaddr=[xxxx:xxx::xx:xxxx::xxx]:9999

But all I get is "Not capable masternode: Could not connect to [xxxx:xxx::xx:xxxx::xxx]:9999"

I'll try the same in debug-mode to get the real reason.
If I didn't miss anything obvious it looks like starting an IPv6-Masternode from an IPv4-wallet does not work.

Edit:
Yep, it indeed tries to connect via IPv6, and since my ISP is IPv4 only it fails:
Code:
2016-03-22 22:40:52 CActiveMasternode::ManageStatus() - Checking inbound connection to '[xxxx:xxx:xx:xxxx::xxx]:9999'
2016-03-22 22:40:52 connect() to [xxxx:xxx:xx:xxxx::xxx]:9999 failed: Network is unreachable (101)
2016-03-22 22:40:52 CActiveMasternode::ManageStatus() - not capable: Could not connect to [xxxx:xxx:xx:xxxx::xxx]:9999
I think it's time to build an IPv6 tunnel next weekend...
How are you hosting your MN? On a VPS somewhere in a data center?

I don't think you need externalip. This is my dash.conf from an MN (all my MNs are IPv6):

Code:
# basic settings
txindex=1
testnet=0
listen=1
daemon=1
logtimestamps=1
maxconnections=256

# server=1 tells darkcoin-QT to accept JSON-RPC commands.
server=1
rpcuser=dashrpc
rpcpassword=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
rpcallowip=127.0.0.1

# masternode settings
masternode=1
masternodeprivkey=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
bind=[xxxx:xxxx:x:xx::xxx:xxxx]:9999
The corresponding line from my masternode.conf (not very helpful, but...):

Code:
mnalias [xxxx:xxxx:x:xx::xxx:xxxx]:9999 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 0

I don't know anything about building tunnels, re-routing, port forwarding, or any of that, but I haven't needed to. My VPS server has IPv6 enabled & I've simply used the above config and opened those ports on there and it worked just fine (but I did have to change the privkey when moving from IPv4 to IPv6).

Is your IPv6 port open on your server?

Code:
$ ufw status
Status: active

To                         Action      From
--                         ------      ----
9999/tcp                   ALLOW       Anywhere
9999/tcp (v6)              ALLOW       Anywhere (v6)

What if you remove these from your local dash.conf and use masternode.conf instead?
Code:
masternode=1
masternodeprivkey=<mykey>
masternodeaddr=[xxxx:xxx::xx:xxxx::xxx]:9999

You shouldn't need to disable RPC, I don't think that should affect this at all.
 

crowning

Well-known Member
How are you hosting your MN? On a VPS somewhere in a data center?
VPS. The usual setup, one IPv4 address and a /64 IPv6 subnet.

I don't think you need externalip. This is my dash.conf from an MN (all my MNs are IPv6):
externalip just keeps dashd from trying to find the address by itself. It's often not needed, but it removes one possible point of failure. Apart from that, my dash.conf looks the same as yours.

Is your IPv6 port open on your server?
Yes, both directions (I'm using iptables, not ufw for firewalling):
Code:
[email protected] /tmp iptables -L
...
ACCEPT  tcp  --  anywhere  anywhere  tcp dpt:9999
ACCEPT  tcp  --  anywhere  anywhere  tcp spt:9999
What if you remove these from your local dash.conf and use masternode.conf instead?
Code:
masternode=1
masternodeprivkey=<mykey>
masternodeaddr=[xxxx:xxx::xx:xxxx::xxx]:9999
Shouldn't make any difference, but I'll try this when I have some more time next weekend.

The main problem is that the wallet tries to open a connection to the Masternode's IPv6 address directly, which it can't in an IPv4 network. Maybe I can temporarily start the Masternode with its IPv4 address enabled and, once it's activated (and broadcast to the Dash network), stop it, remove the IPv4 from the config and start dashd IPv6-only again. Not convenient when you have several Masternodes, but worth a try...

Thanks for your help
:)
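The two-phase idea sketched in this post would boil down to swapping a couple of dash.conf lines between restarts (placeholder addresses, purely illustrative):

```shell
# Phase 1 - activate the masternode over IPv4 (example address):
#   externalip=192.0.2.10
#   bind=192.0.2.10:9999

# Phase 2 - once the start has been broadcast, stop dashd, switch the
# config to IPv6-only and restart (example address):
#   externalip=2001:db8::10
#   bind=[2001:db8::10]:9999
```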
 

nmarley

Administrator
Dash Core Group
The main problem is that the wallet tries to open a connection to the Masternode's IPv6 address directly, which it can't in an IPv4 network.
But that's what's confusing me... I have never had a home ISP that supports ipv6, but my nodes get activated just fine. From my home connection I've tried ipv6 pings and telnet -6 too, and no luck. But the masternodes get activated.

Your iptables output above is for ipv4, right? What about running ip6tables -L?

Code:
[email protected]:/etc/network# ip6tables -L | grep 9999
ACCEPT     tcp      anywhere             anywhere             tcp dpt:9999
Is your VPS host network device set up correctly? I have had to add entries to /etc/network/interfaces on every new VPS, too.

The first IPv6 entry looks like this:

Code:
iface eth0 inet6 static
        address xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
        netmask 64
        gateway xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
        autoconf 0
        dns-nameservers xxxx:xxxx:xxxx::xxxx xxxx:xxxx:xxxx::8888 8.8.8.8
Subsequent ipv6 entries look like this:
Code:
iface eth0 inet6 static
        address xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
        netmask 64
(I use Digital Ocean, so this tutorial is how I got that setup: How To Enable IPv6 for DigitalOcean Droplets)

The fact that I've been able to do what you're not able to, multiple times, leads me to believe that it's not actually trying to connect from your ISP over ipv6. You should be able to activate your IPv6 masternodes from your home even if you only have an ipv4 connection from your home to the internet.
 

Solarminer

Well-known Member
I couldn't get my nodes to start without 9998 open. Once started, sure it can be closed. I had server=1 so I could use the dashwhale restarter too. Otherwise, with server=0 you probably don't need 9998 open.
 

Otaci

Member
Mar 5, 2016
46
49
58
What OS is your local wallet on? Windows does some sort of auto IPv4 to IPv6 tunnelling, not sure about Linux or Mac tho.
 

crowning

Well-known Member
Update: I've built an IPv6 tunnel (through an IPv4 network) with OpenVPN and started my first IPv6 Masternode from my IPv4 network successfully. As soon as the next one gets its payment I'll migrate it as well and see how they like each other.

Building the tunnel was a nice exercise in OpenVPN, because all of the tutorials I found were incomplete, out of date or just plain wrong :rolleyes:

If there's interest I can post a little tutorial once a couple of nodes are migrated and my setup has run stable for a week or so.
 

Solarminer

Well-known Member
Did you use iptunnel on your server? Or actually do the tunnel on your local machine? A server side tunnel would be helpful - yes please post.

I still don't think this is needed and probably something on your VPS is set up incorrectly. Maybe IPv6 forwarding needs to be enabled. Or maybe the host needs to set up a specific set of 8 or 16 IPs vs. just a /64. Obviously, this is at the limits of my understanding, so I am asking questions.

Does dash ninja show your node having a port open?
 

crowning

Well-known Member
Did you use iptunnel on your server? Or actually do the tunnel on your local machine? A server side tunnel would be helpful - yes please post.
I use OpenVPN. The remote IPv6 server acts as the OpenVPN server, the local IPv4 machine as, ahem, the client. I build an IPv4 tunnel between them and route IPv6-in-IPv4 through it via a virtual sit device.
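For comparison, OpenVPN (2.3+) can also carry IPv6 natively inside the tunnel without a separate sit device. A server-side sketch might look like this - an untested fragment with placeholder prefixes, not the setup described above:

```shell
# server.conf fragment (sketch only, example prefixes):
# dev tun
# server 10.8.0.0 255.255.255.0     # IPv4 transport inside the tunnel
# server-ipv6 2001:db8:0:1::/64     # hand the client an IPv6 from this /64
# push "route-ipv6 2000::/3"        # route the client's public IPv6 traffic here
```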

I still don't think this is needed and probably something on your VPS is set up incorrectly. Maybe IPv6 forwarding needs to be enabled. Or maybe the host needs to set up a specific set of 8 or 16 IPs vs. just a /64. Obviously, this is at the limits of my understanding, so I am asking questions.
This wouldn't change anything:
Code:
CActiveMasternode::ManageStatus() - not capable: Could not connect to [xxxx:xxx:xx:xxxx::xxx]:9999
THIS is the problem. When I do a "dash-cli masternode start" it tries to open a socket to the IPv6 address configured in the wallet's dash.conf and produces the log above.
Can't work. Never will. Not in an IPv4 network. Sockets don't lie.
I have no idea how that could ever have worked on your machine. Care to post the "CActiveMasternode::ManageStatus()" lines from your debug.log with debug=1 in the config file? (This only shows up in debug.log when logging is enabled and you call the "masternode start" command, so better do this on testnet.)

Does dash ninja show your node having a port open?
Yep!
 

naruby

New Member
Feb 17, 2016
29
33
13
Just want to share my findings of 'a day in IPv6 land'.

I've added bindip=xx:xx:xx:xx:xx:xx:xx to my MN's dash.conf.
Even when dash-cli masternode status reported the IPv6 service address and the status "Masternode successfully started", I wasn't convinced my Masternode was configured correctly, because I wrongly based my conclusion on dashman's and dashwhale's information.

Dashman status kept saying that the dashd port wasn't open, plus an even more obscure message that the dashninja API was down.

After some digging in the Dashman code I found the first problem:
I've configured 2 IPv6 addresses on my VPS (dreaming of a second MN). Dashman gets the public IPv4 and/or IPv6 address from a curl call to
http://icanhazip.com/. This returned my second IPv6 address, but my MN was bound to my first IPv6, so Dashman doesn't find the open 9999 port.

Removing my second IPv6 address let me discover the second problem in the Dashman code:
It calls
Code:
https://dashninja.pl/api/masternodes?ips=\[\"${MASTERNODE_BIND_IP}:9999
but the dashninja code parses ips as ip:port.
As the IPv6 address contains a lot of colons, the call doesn't return the information that dashman needs.
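The underlying pitfall is generic: naively splitting host:port on a colon breaks for IPv6, because the address part is itself full of colons. A tiny bash illustration (a hypothetical helper for the bracketed form, not dashninja's actual code) that splits on the last colon instead:

```shell
#!/usr/bin/env bash
# Split "[ipv6]:port" on the LAST colon only.
split_hostport() {
  local addr="$1"
  host="${addr%:*}"    # strip the shortest ":..." suffix -> the bracketed host
  port="${addr##*:}"   # strip the longest "...:" prefix  -> the port
}

split_hostport "[2001:db8::1]:9999"
echo "$host $port"     # -> [2001:db8::1] 9999
```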

moocowmoo: I really love your tool! I've even reverted back to IPv4 so I can keep using dashman.
I don't need IPv6 at the moment, but I hope this information is useful to you.


The problem with dashwhale is that the website keeps reporting "Error: Dashwhale push script can't connect to your masternode by RPC. Your DASH masternode is probably crashed!"
If I run dwupdate manually it reports "Update status: ok". The website will then also report thumbs up, but it fails again at the next scheduled check.
Not sure what the cause is.
 

crowning

Well-known Member
I've configured 2 IPv6 addresses on my VPS (dreaming of a second MN). Dashman gets the public IPv4 and/or IPv6 address from a curl call to
http://icanhazip.com/. This returned my second IPv6 address, but my MN was bound to my first IPv6, so Dashman doesn't find the open 9999 port.
Add "externalip=<yourIPv6address>" (without brackets, without port) to your Masternode's dash.conf.

Without this, dashd tries to find out its own IP by itself, and on a multi-IP system that sometimes fails.
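Putting the last two points together, a masternode dash.conf on a multi-IP host would carry both lines, with the brackets only on the bind line (placeholder address, shown only as an illustration):

```shell
# dash.conf fragment (example address):
#   bind=[2001:db8::10]:9999     # brackets and port on the bind line
#   externalip=2001:db8::10      # no brackets, no port, per the post above
```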