
IPv6 addresses

FYI, I couldn't activate a masternode without the port 9998 forwarded. So I think you need the RPC and port forwarding to make it all work with more than one node on a server.

I suppose it is possible that you could run a hot node without RPC, but that is risky.
 

Just did it for a week on testnet. I specified a different port for RPC. Port forwarding is not required.
 
Finally got it working after changing the RPC port on the second masternode.

If you are getting this error:
Code:
Error: An error occurred while setting up the RPC address 0.0.0.0 port 9998 for listening: bind: Address already in use
just add this to dash.conf:
Code:
rpcport=9997
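
If you hit that bind error, it can also help to see which process already holds the port. A generic Linux check (not Dash-specific; the port number is whatever your node uses):
Code:
ss -ltnp | grep 9998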

The masternode started successfully without my even specifying the port anywhere locally. What is the RPC even used for?
 
I believe the rpcport is what dashd listens on for remote-start or remote-mining commands from the wallet.

Only one process can listen on a given rpcport at a time, so running two nodes on one server requires the 2nd node to use a different rpcport.

I think you can get around port forwarding by doing this: change the rpcport to be different on each node, then remote-start them one at a time. Once remote-started individually, they can hot-start if restarted within 60 minutes.
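
As a sketch, the two nodes' dash.conf files would then differ only in their rpcport (port numbers here are illustrative):
Code:
# dash.conf for node 1
rpcport=9998

# dash.conf for node 2
rpcport=9997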

If you want to have them running at the same time and receiving start commands, you need to port forward the 9998 on each IP to different internal ports. iptables is what I use on remote servers with IPv4; ip6tables is used for IPv6 and is only available in Ubuntu 15 or similarly recent releases.
 
But I successfully started 2 nodes without port forwarding or having to specify the port on my local computer.
 

The remote-start of a masternode hot/cold wallet setup doesn't need the JSON-RPC interface at all. The remote-start command is sent to the masternode at the Dash protocol level through the dashd port (9999).

The RPC interface is used for locally managing your MN instance, e.g. running "stop", "getinfo", "mnbudget", things like that. Anything you can do on the debug console you can do with the JSON-RPC interface, and the dash-cli command just uses that. But it's not required for the daemon to run, nor for masternoding.
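
For illustration only (a sketch, not Dash source code; the method name and id are placeholders): dash-cli talks to dashd by sending a Bitcoin-style JSON-RPC 1.0 body over HTTP to the rpcport, authenticated with rpcuser/rpcpassword. Building such a body looks roughly like:

```python
# Sketch of the JSON-RPC 1.0 request body that dash-cli sends to dashd's
# rpcport. Illustrative only; method and id values are placeholders.
import json

def make_rpc_request(method: str, params=None, req_id: str = "test") -> str:
    """Build the JSON body for a Bitcoin-style JSON-RPC 1.0 call."""
    return json.dumps({
        "jsonrpc": "1.0",
        "id": req_id,
        "method": method,
        "params": params if params is not None else [],
    })

# e.g. the body behind `dash-cli getinfo`
print(make_rpc_request("getinfo"))
```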
 

So how do you disable JSON-RPC? Do you just leave rpcport=9998 out of the config?
 
Yep! It works on testnet.

So, in summary: IPv4 not required for anything really. No tunneling, none of that.

1. Just have separate IPv6 addresses set up correctly on your host.
2. Bind each dashd process to a specific IPv6 address, e.g.:

Code:
bind=[address1]:9999

...and in a different config file for a different process:

Code:
bind=[address2]:9999

... and that's it. Set up your local masternode.conf correctly, ensure the port is open for IPv6 on your firewall, and you should be good. At least, it worked for me on testnet.
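
For two dashd processes on one host, a sketch of the launch commands (the paths are assumptions; -conf and -datadir are the usual Bitcoin-style daemon options):
Code:
dashd -conf=/home/user/node1/dash.conf -datadir=/home/user/node1
dashd -conf=/home/user/node2/dash.conf -datadir=/home/user/node2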

One question: I normally start my Masternodes from home (with IPv4-address:9999 in the configuration file) with a cold wallet.
My ISP doesn't yet offer IPv6, so I most probably can't simply put an IPv6-address into the local configuration file. How do you start the Masternode(s)?

What I will try once I've moved the first Masternodes to an IPv6 server (which also has a single IPv4 address) is to activate them via IPv4:9999, IPv4:9998, IPv4:9997 and so on, and pray that they connect to the network via the IPv6 address configured there.

Anyone tried this already?
 
You can run a mainnet masternode only on port 9999.

EDIT: and a testnet one on any port but 9999
 

I think it actually sends the command over the MN network to start it, right? I might be completely wrong, but my guess is that you could still put an IPv6 address into the local MN config file, and even though your ISP doesn't support IPv6 it wouldn't matter, because the "mn_start" command would go out over the wire via IPv4 and the MN network would then ensure it reached your IPv6 server via IPv6.

Please correct me if my understanding is off on this part, which it very well may be.
 

I'll get a server with a /64 IPv6 subnet in the near future and will test this.

And if it doesn't work investigate why it doesn't work and implement what you suggested above.

And make UdjinM6 happy by breaking lots of things on my way :grin:
 
#1. You don't need IPv6 at home to start a remote server. Just put the IPv6 address in with the right format: [xxxx:xxxx::xxxx:xxxx]:9999.
#2. Like udjinM6 said - all WAN ports need to be 9999
#3. If you have multiple IPv4s on the same server, forward the WAN 9999 and 9998 ports to different ports on the machine on the primary IP. Specify the port, rpcport, and bind address in your dash.conf. iptables is your friend.
#4. If you want to forward IPv6s you need Ubuntu 15. But you don't need to forward ports: you can start nodes one at a time without any others running. Once they are started you can run them at the same time and they will hot-start themselves. Otherwise, try your luck at forwarding with ip6tables.
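
The bracketed [address]:port format from #1 can be checked mechanically. A minimal sketch using Python's standard ipaddress module (the addresses below are documentation-range examples, not real nodes):

```python
# Validate an IP address and format it for masternode.conf / dash.conf,
# bracketing it if it is IPv6. Example addresses are illustrative.
import ipaddress

def format_masternode_addr(addr: str, port: int = 9999) -> str:
    """Return addr:port, with IPv6 addresses wrapped in brackets."""
    ip = ipaddress.ip_address(addr)  # raises ValueError if malformed
    if ip.version == 6:
        return f"[{ip.compressed}]:{port}"
    return f"{ip.compressed}:{port}"

print(format_masternode_addr("2001:db8::1"))  # [2001:db8::1]:9999
print(format_masternode_addr("203.0.113.5"))  # 203.0.113.5:9999
```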
 
First tests don't look good.
I'm (still) using dash.conf only, but that shouldn't matter:

dash.conf of Masternode:
bind=[xxxx:xxx::xx:xxxx::xxx]:9999
listen=1
server=1
daemon=1
externalip=xxxx:xxx:xx:xxxx::xxx
masternode=1
masternodeprivkey=<mykey>

and debug.log says:
Bound to [xxxx:xxx:xx:xxxx::xxx]:9999
AddLocal([xxxx:xxx:xx:xxxx::xxx]:9999,4)

Looks good on the hot side.

dash.conf of local wallet:
masternode=1
masternodeprivkey=<mykey>
masternodeaddr=[xxxx:xxx::xx:xxxx::xxx]:9999

But all I get is "Not capable masternode: Could not connect to [xxxx:xxx::xx:xxxx::xxx]:9999"

I'll try the same in debug-mode to get the real reason.
If I didn't miss anything obvious it looks like starting an IPv6-Masternode from an IPv4-wallet does not work.

Edit:
Yep, it indeed tries to connect via IPv6, and since my ISP is IPv4-only it fails:
Code:
2016-03-22 22:40:52 CActiveMasternode::ManageStatus() - Checking inbound connection to '[xxxx:xxx:xx:xxxx::xxx]:9999'
2016-03-22 22:40:52 connect() to [xxxx:xxx:xx:xxxx::xxx]:9999 failed: Network is unreachable (101)
2016-03-22 22:40:52 CActiveMasternode::ManageStatus() - not capable: Could not connect to [xxxx:xxx:xx:xxxx::xxx]:9999

I think it's time to build an IPv6 tunnel next weekend...
 
I doubt it is your IPv4-only connection. Let's try to fix your config.

Try server=0 to disable the RPC commands. Otherwise make sure both 9999 and 9998 are open. You can't use the same ports with multiple IPv4 nodes; with IPv6 you can get away with it by starting them one at a time.
rpcport=9998
port=9999
You need brackets around your external IP. [xxxx:xxx::xx:xxxx::xxx]
I have the 9999 on the external IP instead of the bind IP - like this:
bind=[xxx:xxx::xxxx:xxxx]
externalip=[xxxx:xxx::xxxx:xxxx]:9999

Also, I only use the masternode.conf setup. I thought there was a problem with the dash.conf files and starting nodes, but maybe it was just that they didn't allow donate addresses... which doesn't matter now anyway.
 

How are you hosting your MN? On a VPS somewhere in a data center?

I don't think you need externalip. This is my dash.conf from an MN (all my MNs are IPv6):

Code:
# basic settings
txindex=1
testnet=0
listen=1
daemon=1
logtimestamps=1
maxconnections=256

# server=1 tells darkcoin-QT to accept JSON-RPC commands.
server=1
rpcuser=dashrpc
rpcpassword=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
rpcallowip=127.0.0.1

# masternode settings
masternode=1
masternodeprivkey=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
bind=[xxxx:xxxx:x:xx::xxx:xxxx]:9999

The corresponding line from my masternode.conf (not very helpful, but...):

Code:
mnalias [xxxx:xxxx:x:xx::xxx:xxxx]:9999 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 0


I don't know anything about building tunnels, re-routing, port forwarding, or any of that, but I haven't needed to: my VPS has IPv6 enabled, and I've simply used the above config, opened those ports on there, and it worked just fine (though I did have to change the privkey when moving from IPv4 to IPv6).

Is your IPv6 port open on your server?

Code:
$ ufw status
Status: active

To                         Action      From
--                         ------      ----
9999/tcp                   ALLOW       Anywhere
9999/tcp (v6)              ALLOW       Anywhere (v6)


What if you remove these from your local dash.conf and use masternode.conf instead?
Code:
masternode=1
masternodeprivkey=<mykey>
masternodeaddr=[xxxx:xxx::xx:xxxx::xxx]:9999


You shouldn't need to disable RPC; I don't think that affects this at all.
 