IPv6 addresses

How are you hosting your MN? On a VPS somewhere in a data center?

VPS. The usual setup, one IPv4 address and a /64 IPv6 subnet.

I don't think you need externalip. This is my dash.conf from an MN (all my MNs are IPv6):

externalip just keeps dashd from trying to figure out the address itself. It's often not needed, but it removes one possible point of failure. Besides this, my dash.conf looks the same as yours.
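For illustration, the externalip-related part boils down to something like this (all values are placeholders):
Code:
masternode=1
masternodeprivkey=<mykey>
externalip=2001:db8::1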

Is your IPv6 port open on your server?

Yes, both directions (I'm using iptables, not ufw for firewalling):
Code:
root@MN /tmp iptables -L
...
ACCEPT  tcp  --  anywhere  anywhere  tcp dpt:9999
ACCEPT  tcp  --  anywhere  anywhere  tcp spt:9999
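For completeness, rules like these come from something like:
Code:
# allow inbound and outbound Dash P2P traffic on port 9999
iptables -A INPUT  -p tcp --dport 9999 -j ACCEPT
iptables -A OUTPUT -p tcp --sport 9999 -j ACCEPT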

What if you remove these from your local dash.conf and use masternode.conf instead?
Code:
masternode=1
masternodeprivkey=<mykey>
masternodeaddr=[xxxx:xxx::xx:xxxx::xxx]:9999
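If I remember the masternode.conf format right, the entry there is a single line like this (placeholders; txid and index come from your collateral transaction, and I'm not 100% sure the bracketed IPv6 form is accepted):
Code:
# alias  address:port  privkey  collateral_txid  index
mn1 [2001:db8::1]:9999 <mykey> <collateral_txid> 0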

Shouldn't make any difference, but I'll try this when I have some more time next weekend.

The main problem is that the wallet tries to open a connection to the Masternode's IPv6 address directly, which it can't in an IPv4 network. Maybe I can temporarily start the Masternode with its IPv4 address enabled and, once it's activated (and broadcast to the Dash network), stop it, remove the IPv4 address from the config and start dashd IPv6-only again. Not convenient when you have several Masternodes, but worth a try...
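Roughly like this, untested (assuming the default datadir):
Code:
# temporarily activate with IPv4, then go back to IPv6-only
dash-cli stop
sed -i 's/^externalip=.*/externalip=<myIPv4>/' ~/.dash/dash.conf
dashd
# activate from the wallet, wait for the broadcast, then revert:
dash-cli stop
sed -i 's/^externalip=.*/externalip=<myIPv6>/' ~/.dash/dash.conf
dashd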

Thanks for your help
:smile:
 
The main problem is that the wallet tries to open a connection to the Masternode's IPv6 address directly, which it can't in an IPv4 network.

But that's what's confusing me... I have never had a home ISP that supports ipv6, but my nodes get activated just fine. From my home connection I've tried ipv6 pings and telnet -6 too, and no luck. But the masternodes get activated.

Your iptables output above is for ipv4, right? What about running ip6tables -L?

Code:
root@testwp:/etc/network# ip6tables -L | grep 9999
ACCEPT     tcp      anywhere             anywhere             tcp dpt:9999

Is your VPS host network device set up correctly? I've had to add entries to /etc/network/interfaces on every new VPS, too.

The first IPv6 entry looks like this:

Code:
iface eth0 inet6 static
        address xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
        netmask 64
        gateway xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
        autoconf 0
        dns-nameservers xxxx:xxxx:xxxx::xxxx xxxx:xxxx:xxxx::8888 8.8.8.8

Subsequent ipv6 entries look like this:
Code:
iface eth0 inet6 static
        address xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
        netmask 64

(I use DigitalOcean; this tutorial is how I got that set up: How To Enable IPv6 for DigitalOcean Droplets)
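After editing, I apply the changes with something like this (careful if your SSH session runs over that same interface):
Code:
ifdown eth0 && ifup eth0
# verify the addresses are actually assigned
ip -6 addr show dev eth0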

The fact that I've been able to do what you're not able to, multiple times, leads me to believe that it's not actually trying to connect from your ISP over ipv6. You should be able to activate your IPv6 masternodes from your home even if you only have an ipv4 connection from your home to the internet.
 
I couldn't get my nodes to start without 9998 open. Once started, sure it can be closed. I had server=1 so I could use the dashwhale restarter too. Otherwise, with server=0 you probably don't need 9998 open.
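In other words, something like this in dash.conf (placeholder credentials; dashd serves RPC on 9998 by default on mainnet):
Code:
server=1
rpcuser=<user>
rpcpassword=<longrandompassword>
rpcallowip=127.0.0.1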
 
What OS is your local wallet on? Windows does some sort of auto IPv4 to IPv6 tunnelling, not sure about Linux or Mac tho.
 
Update: I've built an IPv6 tunnel (through an IPv4 network) with OpenVPN and started my first IPv6 Masternode from my IPv4 network successfully. As soon as the next one gets its payment I'll migrate it as well and see how they like each other.

Building the tunnel was a nice exercise in OpenVPN, because all of the tutorials I've found were incomplete, out of date, or just plain wrong :rolleyes:

If there's interest, I can post a little tutorial once a couple of nodes are migrated and my setup has run stably for a week or so.
 
Did you use iptunnel on your server? Or actually do the tunnel on your local machine? A server side tunnel would be helpful - yes please post.

I still don't think this is needed; probably something on your VPS is set up incorrectly. Maybe IPv6 forwarding needs to be enabled. Or maybe the host needs to set up a specific set of 8 or 16 IPs vs just a /64. Obviously, this is at the limits of my understanding, so I am asking questions.

Does dash ninja show your node having a port open?
 
Did you use iptunnel on your server? Or actually do the tunnel on your local machine? A server side tunnel would be helpful - yes please post.

I use OpenVPN. The remote IPv6 server acts as the OpenVPN server, the local IPv4 machine as, ahem, the client. I build an IPv4 tunnel between them and route IPv6-in-IPv4 through it via a virtual sit device.
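Until I get around to the tutorial, the sit part of the setup boils down to something like this (untested sketch with placeholder addresses; 10.8.0.1/10.8.0.2 are the usual OpenVPN tunnel IPs, 2001:db8:: is a documentation prefix standing in for my /64):
Code:
# on the VPS (OpenVPN server, 10.8.0.1 inside the tunnel)
ip tunnel add sit1 mode sit local 10.8.0.1 remote 10.8.0.2 ttl 64
ip link set sit1 up
ip -6 addr add 2001:db8::a/112 dev sit1
sysctl -w net.ipv6.conf.all.forwarding=1
# depending on the host you may also need proxy NDP for the client's address:
# sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
# ip -6 neigh add proxy 2001:db8::b dev eth0

# on the local client (10.8.0.2 inside the tunnel)
ip tunnel add sit1 mode sit local 10.8.0.2 remote 10.8.0.1 ttl 64
ip link set sit1 up
ip -6 addr add 2001:db8::b/112 dev sit1
ip -6 route add default via 2001:db8::a dev sit1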

I still don't think this is needed; probably something on your VPS is set up incorrectly. Maybe IPv6 forwarding needs to be enabled. Or maybe the host needs to set up a specific set of 8 or 16 IPs vs just a /64. Obviously, this is at the limits of my understanding, so I am asking questions.

This wouldn't change anything:
Code:
CActiveMasternode::ManageStatus() - not capable: Could not connect to [xxxx:xxx:xx:xxxx::xxx]:9999

THIS is the problem. When I do a "dash-cli masternode start" it tries to open a socket to the IPv6 address configured in the wallet's dash.conf and produces the log above.
Can't work. Never will. Not in an IPv4 network. Sockets don't lie.
I have no idea how that could ever have worked on your machine. Care to post the "CActiveMasternode::ManageStatus()" lines from your debug.log with debug=1 in the config file? (These only show up in debug.log when logging is enabled and you call the "masternode start" command, so better do this on testnet.)
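That is, something like:
Code:
# with debug=1 in dash.conf (adjust the path to your datadir/testnet)
dash-cli masternode start
grep "CActiveMasternode::ManageStatus" ~/.dash/testnet3/debug.log | tail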

Does dash ninja show your node having a port open?

Yep!
 
Just want to share my findings of 'a day in IPv6 land'.

I've added bindip=xx:xx:xx:xx:xx:xx:xx to my MN's dash.conf.
Even when dash-cli masternode status reported the IPv6 service address and the status "Masternode successfully started", I wasn't convinced my Masternode was configured correctly, because I wrongly based my conclusion on the information from dashman and dashwhale.

Dashman status kept saying that the dashd port wasn't open, plus an even more obscure message that the dashninja API was down.

After some digging in the Dashman code I found the first problem:
I've configured 2 IPv6 addresses on my VPS (dreaming of a second MN). Dashman gets the public IPv4 and/or IPv6 address from a curl call to
http://icanhazip.com/. This returned my second IPv6 address, but my MN was bound to my first one, so Dashman didn't find the open port 9999.
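You can see that behaviour with curl directly; it reports whatever source address the kernel picks, not necessarily the one dashd is bound to:
Code:
curl -6 http://icanhazip.com/
# force a specific source address to check each one (placeholder address)
curl -6 --interface 2001:db8::1 http://icanhazip.com/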

Removing my second IPv6 address let me discover the second problem in the Dashman code:
It calls
Code:
https://dashninja.pl/api/masternodes?ips=\[\"${MASTERNODE_BIND_IP}:9999
but the dashninja code parses ips as ip:port.
As the IPv6 address contains a lot of colons, the call doesn't return the information that dashman needs.
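A quick shell illustration of the ambiguity (which is exactly what the bracketed [address]:port notation avoids):
Code:
# is this 2001:db8::1 with port 9999, or the plain address 2001:db8::1:9999?
addr="2001:db8::1:9999"
echo "${addr%%:*}"   # naive first-colon split -> "2001"
echo "${addr%:*}"    # last-colon split -> "2001:db8::1", but only by luck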

moocowmoo: I really love your tool! I've even reverted back to IPv4 so I can keep using dashman.
I don't need IPv6 at the moment, but I hope this information is useful to you.


The problem with dashwhale is that the website keeps reporting "Error: Dashwhale push script can't connect to your masternode by RPC. Your DASH masternode is probably crashed!"
If I run dwupdate manually, it reports "Update status: ok". The website will then also report thumbs up, but it fails again at the next scheduled check.
Not sure what the cause is.
 
I've configured 2 IPv6 addresses on my VPS (dreaming of a second MN). Dashman gets the public IPv4 and/or IPv6 address from a curl call to
http://icanhazip.com/. This returned my second IPv6 address, but my MN was bound to my first one, so Dashman didn't find the open port 9999.

Add "externalip=<yourIPv6IP>" (without brackets, without port) to your Masternode's dash.conf.

Without it, dashd tries to find out its own IP itself, and on a multi-IP system it sometimes fails.
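For example (documentation-prefix address as a stand-in):
Code:
externalip=2001:db8::1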
 