12.1 Testnet Testing Phase Two Ignition

Status
Not open for further replies.

GNULinuxGuy

Member
Jul 22, 2014
112
68
78
Dash Address
XjkXfrYTSvdYe4738DtNVX5XfUz7qU9HnY
My tLP running v0.12.1.0-f81ea67 got stuck on block 138287. Updated it to v0.12.1.0-4b7bd6b and created a fresh working directory. Will provide additional debugging data if it gets stuck again.
 

TanteStefana

Grizzled Member
Foundation Member
Mar 9, 2014
2,863
1,854
1,283
OK, so I'm stuck on block 138911 with my Windows QT wallet. Trying to open the debug log; if I can find something, I'll post :)

No need, we have a new version I see, so I'm downloading that to see if mixing will work now :)
 

TanteStefana

Grizzled Member
Foundation Member
Mar 9, 2014
2,863
1,854
1,283
Hummm, I'm 1 block ahead of https://test.explorer.dash.org/chain/tDash, and mixing is going nowhere. Am I on the wrong chain? I've been waiting in the queue forever. I guess I'll delete all the .dat files in my .dashcore folder and see if that makes things work...

This looks weird, but I have no real idea what I'm looking at:

Code:
missing masternode entry: 6aba20dac2a630319bf396b3be6fa2d0c6c9623767e5392b8543d15dd5773870-1
2017-01-20 11:52:50 CreateTxLockCandidate -- New Transaction Lock Candidate! txid=801ad48bfa986f7d1bb638a6b38e6dc3c2ef3b04f6e1f9e657718edb4e0637e7
2017-01-20 11:52:50 TXLOCKREQUEST -- Transaction Lock Request: 23.22.36.5:19999 /Dash Core:0.12.1/ : accepted 801ad48bfa986f7d1bb638a6b38e6dc3c2ef3b04f6e1f9e657718edb4e0637e7
2017-01-20 11:52:50 CreateTxLockCandidate -- New Transaction Lock Candidate! txid=74a3318e20aa2a2d0e1047a0272d7420b2f20129b4557fbbbd254914cc83ce4a
2017-01-20 11:52:50 TXLOCKREQUEST -- Transaction Lock Request: [2604:a880:1:20::ecf:2000]:19999 /Dash Core:0.12.1/ : accepted 74a3318e20aa2a2d0e1047a0272d7420b2f20129b4557fbbbd254914cc83ce4a
2017-01-20 11:52:50 tor: Thread interrupt
2017-01-20 11:52:50 torcontrol thread exit
2017-01-20 11:52:50 opencon thread interrupt
2017-01-20 11:52:50 addcon thread interrupt
2017-01-20 11:52:50 msghand thread interrupt
2017-01-20 11:52:50 net thread interrupt
2017-01-20 11:52:50 scheduler thread interrupt
2017-01-20 11:52:50 PrepareShutdown: In progress...
2017-01-20 11:52:51 StopNode()
2017-01-20 11:52:51 Verifying mncache.dat format...
2017-01-20 11:52:51 Loaded info from mncache.dat  44ms
2017-01-20 11:52:51      Masternodes: 114, peers who asked us for Masternode list: 0, peers we asked for Masternode list: 0, entries in Masternode list we asked for: 0, nDsqCount: 8370
2017-01-20 11:52:51 Writting info to mncache.dat...
2017-01-20 11:52:51 Written info to mncache.dat  37ms
2017-01-20 11:52:51      Masternodes: 114, peers who asked us for Masternode list: 0, peers we asked for Masternode list: 0, entries in Masternode list we asked for: 62, nDsqCount: 8564
2017-01-20 11:52:51 mncache.dat dump finished  83ms
2017-01-20 11:52:51 Verifying mnpayments.dat format...
2017-01-20 11:52:52 Loaded info from mnpayments.dat  274ms
2017-01-20 11:52:52      Votes: 51459, Blocks: 4924
2017-01-20 11:52:52 Writting info to mnpayments.dat...
2017-01-20 11:52:52 Written info to mnpayments.dat  111ms
2017-01-20 11:52:52      Votes: 50106, Blocks: 4776
2017-01-20 11:52:52 mnpayments.dat dump finished  395ms
2017-01-20 11:52:52 Verifying governance.dat format...
2017-01-20 11:52:55 Loaded info from governance.dat  2757ms
2017-01-20 11:52:55      Governance Objects: 1328 (Proposals: 1258, Triggers: 53, Watchdogs: 17, Other: 0; Seen: 2375), Votes: 0
2017-01-20 11:52:55 Writting info to governance.dat...
2017-01-20 11:52:56 Written info to governance.dat  1049ms
2017-01-20 11:52:56      Governance Objects: 1304 (Proposals: 1258, Triggers: 43, Watchdogs: 3, Other: 0; Seen: 2401), Votes: 225223
2017-01-20 11:52:56 governance.dat dump finished  3860ms
2017-01-20 11:52:56 Verifying netfulfilled.dat format...
2017-01-20 11:52:56 Loaded info from netfulfilled.dat  0ms
2017-01-20 11:52:56      Nodes with fulfilled requests: 12
2017-01-20 11:52:56 Writting info to netfulfilled.dat...
2017-01-20 11:52:56 Written info to netfulfilled.dat  1ms
2017-01-20 11:52:56      Nodes with fulfilled requests: 9
2017-01-20 11:52:56 netfulfilled.dat dump finished  2ms
2017-01-20 11:52:58 Shutdown: done
 

TanteStefana

Grizzled Member
Foundation Member
Mar 9, 2014
2,863
1,854
1,283
Ugh, after going to bed, I lost internet :( So I don't know if it turned off right before I went to bed or what, but no new mixing happened.
 

Defacto

Member
Aug 20, 2015
50
30
58
Warsaw
Dash Address
Xj8PUS4r9vk5y8BvoovMoRpbhHXuW6uj5o
Latest build of 12.1 (yesterday; no new changes since today) from sources hangs. Three masternodes on testnet, same behaviour. Here's some more info.

1. Last log lines:
testmn01:
Code:
2017-01-20 05:17:55 CActiveMasternode::ManageStateLocal -- Update Masternode List
2017-01-20 05:17:55 CMasternodeMan::UpdateMasternodeList -- masternode=06069e73139bbcfabb9371c8683190566988bf019deaeb090413f2bee0786613-1  addr=83.1.99.1:19999
2017-01-20 05:17:55 CActiveMasternode::ManageStateRemote -- NOT_CAPABLE: Masternode in NEW_START_REQUIRED state
testmn02:
Code:
2017-01-20 12:41:35 CActiveMasternode::ManageStateInitial -- Checking inbound connection to '83.1.99.2:19999'
2017-01-20 12:41:35 CActiveMasternode::ManageStateRemote -- NOT_CAPABLE: Masternode not in masternode list
2017-01-20 12:41:35 CActiveMasternode::ManageStateLocal -- Update Masternode List
2017-01-20 12:41:35 CMasternodeMan::UpdateMasternodeList -- masternode=06069e73139bbcfabb9371c8683190566988bf019deaeb090413f2bee0786613-1  addr=83.1.99.2:19999
2017-01-20 12:41:35 CreateTxLockCandidate -- New Transaction Lock Candidate! txid=23a238b7c787789830681e3f41e2f902f2aa45f4169a6598d2bf89c1bdbdd40e
2017-01-20 12:41:35 TXLOCKREQUEST -- Transaction Lock Request: 91.121.86.30:54144 /Dash Core:0.12.1/ : accepted 23a238b7c787789830681e3f41e2f902f2aa45f4169a6598d2bf89c1bdbdd40e
testmn03:
Code:
2017-01-20 11:28:05 CreateTxLockCandidate -- New Transaction Lock Candidate! txid=8dd5e1a37cc3bcafa62f44631fa6ea5bca64b1b99407220c28a1d9de21a49cf4
2017-01-20 11:28:05 TXLOCKREQUEST -- Transaction Lock Request: 82.196.0.241:19999 /Dash Core:0.12.1(bitcore)/ : accepted 8dd5e1a37cc3bcafa62f44631fa6ea5bca64b1b99407220c28a1d9de21a49cf4
2017-01-20 11:28:05 CreateTxLockCandidate -- New Transaction Lock Candidate! txid=af57aaf548e22039905fbdf74723aeb3fc35654d6580c1b0339b24103123bc00
2017-01-20 11:28:05 TXLOCKREQUEST -- Transaction Lock Request: 176.31.145.16:33238 /Dash Core:0.12.1/ : accepted af57aaf548e22039905fbdf74723aeb3fc35654d6580c1b0339b24103123bc00
So it looks like it may have something to do with ManageState or transaction locks.

2. ktrace output (the same on all MNs). Looks like some kind of loop.
Code:
61729 dash-wallet RET   _umtx_op -1 errno 60 Operation timed out
 61729 dash-wallet CALL  clock_gettime(0,0x7fffdcfe5808)
 61729 dash-wallet RET   clock_gettime 0
 61729 dash-wallet CALL  _umtx_op(0x80237e248,UMTX_OP_WAIT_UINT_PRIVATE,0,0x18,0x7fffdcfe5688)
 61729 dashd    RET   nanosleep 0
 61729 dashd    CALL  nanosleep(0x7fffffffdeb8,0)
 61729 dashd    RET   nanosleep 0
 61729 dashd    CALL  nanosleep(0x7fffffffdeb8,0)
 61729 dashd    RET   nanosleep 0
 61729 dashd    CALL  nanosleep(0x7fffffffdeb8,0)
 61729 dash-wallet RET   _umtx_op -1 errno 60 Operation timed out
 61729 dash-wallet CALL  clock_gettime(0,0x7fffdcfe5808)
 61729 dash-wallet RET   clock_gettime 0
 61729 dash-wallet CALL  _umtx_op(0x80237e248,UMTX_OP_WAIT_UINT_PRIVATE,0,0x18,0x7fffdcfe5688)

And... it looks like 12.1 still has some stability issues :-/
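For what it's worth, the ktrace pattern above (a timed wait returning "Operation timed out", a clock read, then another timed wait) is what an ordinary condition-variable wait loop looks like at the syscall level on FreeBSD; on its own it is idle polling, not necessarily the hang itself. A minimal Python analogue (not Dash Core code) of that pattern:

```python
import threading

# A worker repeatedly waits on a condition variable with a timeout and is
# never signalled. Each wait shows up at the syscall level as _umtx_op
# returning ETIMEDOUT (errno 60 on FreeBSD), then clock_gettime, then
# another timed wait -- exactly the loop visible in the ktrace output.
cond = threading.Condition()
timeouts = 0

def waiter(rounds: int) -> None:
    global timeouts
    with cond:
        for _ in range(rounds):
            signalled = cond.wait(timeout=0.01)  # False when the wait times out
            if not signalled:
                timeouts += 1

t = threading.Thread(target=waiter, args=(3,))
t.start()
t.join()
print(timeouts)  # 3: every wait timed out, because nothing ever notifies
```

If the daemon is truly stuck, the interesting thread is the one that never gets notified, which is why a full stack trace is more informative than the syscall trace alone.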
 

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
Latest build of 12.1 (yesterday; no new changes since today) from sources hangs. Three masternodes on testnet, same behaviour. Here's some more info.

1. Last log lines:
testmn01:
Code:
2017-01-20 05:17:55 CActiveMasternode::ManageStateLocal -- Update Masternode List
2017-01-20 05:17:55 CMasternodeMan::UpdateMasternodeList -- masternode=06069e73139bbcfabb9371c8683190566988bf019deaeb090413f2bee0786613-1  addr=83.1.99.1:19999
2017-01-20 05:17:55 CActiveMasternode::ManageStateRemote -- NOT_CAPABLE: Masternode in NEW_START_REQUIRED state
testmn02:
Code:
2017-01-20 12:41:35 CActiveMasternode::ManageStateInitial -- Checking inbound connection to '83.1.99.2:19999'
2017-01-20 12:41:35 CActiveMasternode::ManageStateRemote -- NOT_CAPABLE: Masternode not in masternode list
2017-01-20 12:41:35 CActiveMasternode::ManageStateLocal -- Update Masternode List
2017-01-20 12:41:35 CMasternodeMan::UpdateMasternodeList -- masternode=06069e73139bbcfabb9371c8683190566988bf019deaeb090413f2bee0786613-1  addr=83.1.99.2:19999
2017-01-20 12:41:35 CreateTxLockCandidate -- New Transaction Lock Candidate! txid=23a238b7c787789830681e3f41e2f902f2aa45f4169a6598d2bf89c1bdbdd40e
2017-01-20 12:41:35 TXLOCKREQUEST -- Transaction Lock Request: 91.121.86.30:54144 /Dash Core:0.12.1/ : accepted 23a238b7c787789830681e3f41e2f902f2aa45f4169a6598d2bf89c1bdbdd40e
testmn03:
Code:
2017-01-20 11:28:05 CreateTxLockCandidate -- New Transaction Lock Candidate! txid=8dd5e1a37cc3bcafa62f44631fa6ea5bca64b1b99407220c28a1d9de21a49cf4
2017-01-20 11:28:05 TXLOCKREQUEST -- Transaction Lock Request: 82.196.0.241:19999 /Dash Core:0.12.1(bitcore)/ : accepted 8dd5e1a37cc3bcafa62f44631fa6ea5bca64b1b99407220c28a1d9de21a49cf4
2017-01-20 11:28:05 CreateTxLockCandidate -- New Transaction Lock Candidate! txid=af57aaf548e22039905fbdf74723aeb3fc35654d6580c1b0339b24103123bc00
2017-01-20 11:28:05 TXLOCKREQUEST -- Transaction Lock Request: 176.31.145.16:33238 /Dash Core:0.12.1/ : accepted af57aaf548e22039905fbdf74723aeb3fc35654d6580c1b0339b24103123bc00
So it looks like it may have something to do with ManageState or transaction locks.

2. ktrace output (the same on all MNs). Looks like some kind of loop.
Code:
61729 dash-wallet RET   _umtx_op -1 errno 60 Operation timed out
 61729 dash-wallet CALL  clock_gettime(0,0x7fffdcfe5808)
 61729 dash-wallet RET   clock_gettime 0
 61729 dash-wallet CALL  _umtx_op(0x80237e248,UMTX_OP_WAIT_UINT_PRIVATE,0,0x18,0x7fffdcfe5688)
 61729 dashd    RET   nanosleep 0
 61729 dashd    CALL  nanosleep(0x7fffffffdeb8,0)
 61729 dashd    RET   nanosleep 0
 61729 dashd    CALL  nanosleep(0x7fffffffdeb8,0)
 61729 dashd    RET   nanosleep 0
 61729 dashd    CALL  nanosleep(0x7fffffffdeb8,0)
 61729 dash-wallet RET   _umtx_op -1 errno 60 Operation timed out
 61729 dash-wallet CALL  clock_gettime(0,0x7fffdcfe5808)
 61729 dash-wallet RET   clock_gettime 0
 61729 dash-wallet CALL  _umtx_op(0x80237e248,UMTX_OP_WAIT_UINT_PRIVATE,0,0x18,0x7fffdcfe5688)

And... it looks like 12.1 still has some stability issues :-/
Thanks for reporting. Just wanted to say that we've hammered testnet with tx/IS today at a load of 3x current Bitcoin transaction volume, so your nodes getting stuck is more likely a problem of high CPU load...
 

Defacto

Member
Aug 20, 2015
50
30
58
Warsaw
Dash Address
Xj8PUS4r9vk5y8BvoovMoRpbhHXuW6uj5o
Thanks for reporting. Just wanted to say that we've hammered testnet with tx/IS today at a load of 3x current Bitcoin transaction volume, so your nodes getting stuck is more likely a problem of high CPU load...
No it isn't. CPU usage is normal. dashd has been looping for hours and not responding to RPC commands, etc. Only kill -9 helps ;-)
 

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
No it isn't. CPU usage is normal. dashd has been looping for hours and not responding to RPC commands, etc. Only kill -9 helps ;-)
Mind attaching a gdb instance to one of the stuck nodes and getting a stack trace? Maybe it is a locking issue.
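For anyone following along, gdb's batch mode is the usual way to grab such a stack trace without an interactive session. A small sketch of the invocation (the pid 61729 is just the example from the ktrace output earlier in the thread; actually attaching requires gdb installed and ptrace permission on the process):

```python
def gdb_backtrace_cmd(pid: int) -> list[str]:
    """Build a gdb invocation that attaches to `pid`, prints a backtrace
    of every thread, and exits (-batch)."""
    return ["gdb", "-p", str(pid), "-batch", "-ex", "thread apply all bt"]

# To actually run it against a stuck dashd (needs gdb and ptrace rights):
# import subprocess
# out = subprocess.run(gdb_backtrace_cmd(61729), capture_output=True, text=True)
# print(out.stdout)
print(" ".join(gdb_backtrace_cmd(61729)))
```

"thread apply all bt" is what reveals a deadlock: two threads each blocked in a lock acquisition will show it at the top of their stacks.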
 

Defacto

Member
Aug 20, 2015
50
30
58
Warsaw
Dash Address
Xj8PUS4r9vk5y8BvoovMoRpbhHXuW6uj5o
Mind attaching a gdb instance to one of the stuck nodes and getting a stack trace? Maybe it is a locking issue.
OK, I just killed all three dashd processes, so I'll probably need to wait a couple of hours...
 
  • Like
Reactions: flare

t0dd

Active Member
Mar 21, 2016
150
132
103
keybase.io
Dash Address
XyxQq4qgp9B53QWQgSqSxJb4xddhzk5Zhh
Get y'er hot fresh packages!
A Dash on Fedora, CentOS, RHEL Update.

https://github.com/taw00/dashcore-rpm

Been continuing to develop and package the dashcore suite of RPMs as always, but there have been changes...
  • refactored the contrib tarball -- it makes much more sense now. And I added the exploded version of it in the github repo
  • polished the desktop icons and descriptor file for linux machines to meet freedesktop standards -- hicolor, and HighContrast, etc.
  • fixed and fully tested dashd running as a system service as system user dash -- this will lead us towards MUCH more secure nodes in the future
  • created firewalld service definitions, so that you can nicely add rules based on dashcore-node and dashcore-node-testnet rather than the so very gauche 9999 and 19999. ;) Polish, my friends!
  • fixed some requires and file permissions, etc.
  • Oh. And I finally packaged Sentinel (see link at the top). Sentinel is very inflexible about how it is positioned and leveraged on the filesystem and between users, so, right now, as delivered via RPM, it is ideally suited for the systemd-driven-dashd use case, but I am looking at ways to break Sentinel out of its peculiar configuration. Since it hasn't been locked down to a branch yet, maybe this will be improved in the next two weeks.
  • And yes, ideally, much of this will eventually get contributed back upstream.

Enjoy folks. Send me a note if you have questions, commentary, criticism, or flattery. :)

-t0dd
 
Last edited:

TanteStefana

Grizzled Member
Foundation Member
Mar 9, 2014
2,863
1,854
1,283
Is this the current version? Dash Core Daemon version v0.12.1.0-2a43d23 I'm pretty sure I remember updating yesterday... :p

Otherwise my MNs seem to be running OK for quite some time now (at least 36 hours :) )
 
  • Like
Reactions: splawik21

Lariondos

Well-known Member
Foundation Member
Apr 8, 2014
89
61
158
I can't confirm network stability with version d879702. All my masternodes - updated, restarted and reindexed yesterday - are stuck. Some at block 140443, some at 140448 and some (including windows wallet) at 140485.
 

Lariondos

Well-known Member
Foundation Member
Apr 8, 2014
89
61
158
Thank you - those commands are working - MNs are syncing again.
Is this just a testnet issue?
 

flare

Administrator
Dash Core Team
Moderator
May 18, 2014
2,286
2,404
1,183
Germany
Thank you - those commands are working - MNs syncing again.
Is this just a testnet issue ?
It is an issue specific to testnet: we are running testnet on a network

- with only 5-10 CPU cores mining hashes,
- running 50+ masternodes on insufficient hardware (CPU/RAM),
- flooding the network with twice as many transactions as Bitcoin is currently processing (normal tx and instant send), and
- having 240 times more superblocks than mainnet will ever see

to harden the software :)

The nodes being stuck on particular blocks is a direct consequence of this "overloaded, underpowered" network. What actually happens is that nodes are not able to process messages in time due to CPU overload or a stalled network.
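The dynamic flare describes can be made concrete with a toy queueing sketch (not Dash Core code; all numbers hypothetical): as long as each message is processed faster than the next one arrives, the backlog stays bounded, but once per-message processing time exceeds the inter-arrival time, the backlog grows linearly and the node never catches up.

```python
# Toy model of a single message-processing loop. Messages arrive every
# `arrival` seconds; each takes `service` seconds of CPU to process.
def backlog_after(n_messages: int, arrival: float, service: float) -> float:
    """Seconds of queued, unprocessed work after n_messages have arrived."""
    queued = 0.0
    for _ in range(n_messages):
        queued = max(0.0, queued - arrival)  # work drained between arrivals
        queued += service                    # new message enqueued
    return queued

# Healthy node: processing is faster than arrival -> backlog stays bounded.
print(backlog_after(1000, arrival=1.0, service=0.5))  # 0.5
# Overloaded node: each message takes longer than the gap between messages,
# so the backlog grows by (service - arrival) per message, without bound.
print(backlog_after(1000, arrival=1.0, service=1.5))  # 501.0
```

A node in the second regime looks exactly like the reports above: still running, but stuck on an old block while the queue of unprocessed messages keeps growing.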
 

Lariondos

Well-known Member
Foundation Member
Apr 8, 2014
89
61
158
Well, thanks for the detailed explanation. Although my MNs should have enough resources, it's hard for them to keep up with testnet these days.
I'll try to start a couple of more powerful servers next.
 

camosoul

Grizzled Member
Sep 19, 2014
2,261
1,130
1,183
I'm still a bit concerned about a bandwidth-waste attack vector... With DAPI we're going to be moving so much data that a true DDoS wouldn't be needed. Just suck up so much bandwidth that the quotas are tripped, and large swaths of the MN network go offline because the host shuts them down... Defeating storage integrity this way is entirely feasible. Even temporary disruption could cause permanent issues.
 

t0dd

Active Member
Mar 21, 2016
150
132
103
keybase.io
Dash Address
XyxQq4qgp9B53QWQgSqSxJb4xddhzk5Zhh
Overhauled the v12.1 Masternode Guide for Fedora, CentOS, and Red Hat Enterprise Linux
https://github.com/taw00/dashcore-rpm/tree/master/documentation

  • The documents are modular instead of one giant document.
  • Installation is trivial from the OS on up, because everything is packaged.
  • Configuration assumes you want to run the masternode as a systemd service (as it was really designed to be run).
  • Firewall configuration and Fail2ban instructions are covered.
  • Sentinel packaged and auto-installed.
Nearly ready for Feb 5! :)
 
  • Like
Reactions: GNULinuxGuy

TanteStefana

Grizzled Member
Foundation Member
Mar 9, 2014
2,863
1,854
1,283
I'm still a bit concerned about a bandwidth-waste attack vector... With DAPI we're going to be moving so much data that a true DDoS wouldn't be needed. Just suck up so much bandwidth that the quotas are tripped, and large swaths of the MN network go offline because the host shuts them down... Defeating storage integrity this way is entirely feasible. Even temporary disruption could cause permanent issues.
@flare can you please comment on this?? Could this be a problem? I would like to know other opinions, because it sure sounds like a good point (depending on how much data will be incoming :) Thanks :)
 

Walter

Active Member
Masternode Owner/Operator
Jul 17, 2014
231
201
103
Most MNs are hosted with big VPS providers, and these days you usually get 1TB+ of bandwidth thrown in per month. That's a LOT of data to attempt to generate on the network. There is certainly a possible attack vector, but as long as there is data validation on field entries within the database structure, data formats and field lengths should be kept to appropriate standards. The DAPI and DashDrive would have to start accepting data such as images/video in order to run up this sort of bandwidth, and/or an attacker would have to find a way of recursively querying the DAPI without consequence or economic cost; in either case that might be enough to trip 50%+ of the current MNs in extreme circumstances.

I don't believe that accepting such storage-intensive forms of data is part of the current Evo architecture at present? My understanding is that it'll be purely restricted to alphanumeric input/output... likewise with DAPI traffic. That's not to say that huge amounts of bandwidth can't be consumed by an alphanumeric-based attack; it's just a question of whether such an attack would be economically feasible. Based on data from my MN farm I see an average bandwidth utilisation of 854MB per MN per day. That means we would have to see a 116996% increase in traffic overnight in order to knock out the majority of the MN network... Very unlikely, but certainly something for the Core Team to consider (if they haven't already).
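Walter's 116996% figure is the percentage increase needed for one day's average traffic (854MB) to reach 1TB, taking 1TB as 1,000,000MB. A quick arithmetic check (the quota interpretation is Walter's assumption, not an official number):

```python
# Average observed usage per masternode per day, and a hypothetical 1 TB cap,
# both taken from the post above (1 TB treated as 10^6 MB).
avg_mb_per_day = 854
quota_mb = 1_000_000

# Percent increase in daily traffic required to consume the full quota.
increase_pct = (quota_mb - avg_mb_per_day) / avg_mb_per_day * 100
print(round(increase_pct))  # 116996
```

Note this treats the whole monthly allowance as burnable in a single day; spread over a 30-day month the required sustained increase would be about 30x smaller, which is the more pessimistic (easier for an attacker) reading.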

Interesting point @camosoul let's see what some of the more technical peeps have to say.

Walter
 
Last edited:
  • Like
Reactions: qwizzie

camosoul

Grizzled Member
Sep 19, 2014
2,261
1,130
1,183
It seems that the sledge-hammer method of preventing blockchain spam, just adding a fee, would be punitive to those simply wanting updated data. For example, any vendor, online or retail, querying for up-to-date data while waiting on a pending transaction would have to pay said fee even if the customer backed out or abandoned the cart. In some cases, a wake-up-and-poll pattern that accelerates into repeated, multiple pings in the time leading up to, and during, a transaction could incur expense in an almost false-positive model that would be indistinguishable from an attack on the same metrics.

I'm just dreaming up the problem. Maybe it's an over-thought non-problem. But the reality of simply spamming large shards off the network using pipe limits seems a concern needing to be addressed. DAPI will fail if DashDrive can be broken. This makes the very existence of DASH dependent on the resilience of DashDrive. Clients will depend on DAPI returning data from DashDrive for every little thing, even the blockchain... Knowing DAPI's nature, there's really no way to use the conventional means normally applied to mitigate such a thing. If DashDrive is compromised, it all falls down.

My pizza box comes with 20TB, and I can upgrade to unmetered 100Mbps for a fee of more than the whole machine... Last month I used only 11GB, so that's a drop in the bucket. But what we're running now in no way resembles DAPI, and that data is truly useless. This doesn't even consider shard propagation, or shifts in topology dynamically assumed by organically moving goalposts... The resilience factor itself could become cumbersome. The sharding model has to be geometric... One could simply bring nodes online and offline to change the node count and push resilience refactoring around for no reason, just to waste the pipe...

I'd like to know more about how this is being handled. It's my observation that 12.1 will be a limited rollout of some of these functions, so that more real-world usage data can be gathered on which to base these decisions. There's just not enough known to make the choices in the current situation, and nothing like this has ever been done that could provide useful comparison data. Clouds as we know them don't work like this. Storj has such low utilization that it's not useful. I'd like to know what they're keeping an eye on, specifically, and why. Just cuz. Keeping a finger on the pulse...

Simply moving to unmetered plans could be sufficient, with the third-tier quorum response time being the default gatekeeper/bottleneck. But few MNOs are running their own dedicated boxes... You can't usually get unmetered pipes for a single VPS... This could drive even more unhealthy consolidation into MN services that shouldn't exist in the first place... There needs to be a paradoxical element added, a la more hashpower causing difficulty to increase.

I'm sure glad this isn't up for a vote by the MNs... I'm willing to bet 95% of them don't even understand this topic... Hell, I barely have enough clue.
 
Last edited:

TanteStefana

Grizzled Member
Foundation Member
Mar 9, 2014
2,863
1,854
1,283
Again, @eduffield @flare @UdjinM6 @moocowmoo @t0dd, can anyone speak to what Camo is saying above?

I don't know how many API calls you're planning on servicing per second, etc... But have you compared what we're doing with other services like this? Are there any other services like our DAPI? LOL How does NASDAQ do it?

Anyway, if we don't hear from the boys, @camosoul, I think whatever it requires will be affordable for the MNs to implement, because heavy use ought to mean a higher price, no? But it is a good question, and I'd like to know what the guys are thinking on this, and that they believe it's do-able (not that they never thought about it!)

We'll probably have to see how it goes as we build this system, test it out, and add functionality...?
 

camosoul

Grizzled Member
Sep 19, 2014
2,261
1,130
1,183
Again, @eduffield @flare @UdjinM6 @moocowmoo @t0dd, can anyone speak to what Camo is saying above?

I don't know how many API calls you're planning on servicing per second, etc... But have you compared what we're doing with other services like this? Are there any other services like our DAPI? LOL How does NASDAQ do it?

Anyway, if we don't hear from the boys, @camosoul, I think whatever it requires will be affordable for the MNs to implement, because heavy use ought to mean a higher price, no? But it is a good question, and I'd like to know what the guys are thinking on this, and that they believe it's do-able (not that they never thought about it!)

We'll probably have to see how it goes as we build this system, test it out, and add functionality...?
I think they're a little busy right now. I'm not demanding an immediate response or anything... This is the sort of thing that would/should be on every MNO's mind, were they smart enough to comprehend it...

Just want to know how these things are being handled/mitigated. When you guys get the time...
 
  • Like
Reactions: stan.distortion

TanteStefana

Grizzled Member
Foundation Member
Mar 9, 2014
2,863
1,854
1,283
I expect that running a MN in the not-too-distant future (3-5 years) will become increasingly complex as well as resource-heavy. It'll probably cost us hundreds to thousands of dollars a month (well, later it'll cost thousands). In fact, that might be good, because we'll probably have to get out of the common service providers and either create our own server centers or go to very specialized servers. Whatever happens, we *should* earn enough to pay for that.

But yah, I'll stop bugging the boys. Although I'd like to confirm whether any of the previously discussed features are indeed going to make it into 12.1, like the automated spork trigger (no longer requiring human intervention). And I'd like to see a setup where, when the "rules" are relaxed during a spork, only the previous version and the new version of the software are acceptable, not one where miners can haul off with all the coins using 3-year-old wallets until consensus is achieved. This is getting costly. I really hope this is the last time we're exposed to this.

Also, one thing I'm not clear on: are mixing requests already going through the DAPI? Are masternodes "blinded" in this version yet? Thanks to anyone who can answer!!!
 

Nitya Sattva

New Member
Nov 21, 2016
31
39
18
40
Is it no longer possible to InstantSend funds to an address in your own wallet?

Updated my testnet clients yesterday and some of my tests broke: I was not seeing InstantSend messages on the network for transactions that move funds inside the client's wallet. The fee for InstantSend is charged, but the transaction is not confirmed as an InstantSend and requires the usual 6 blocks in the qt-client. If this is the behaviour you want for internal transactions, then why charge the full InstantSend fee?

And the transaction log in dash-qt is confusing now: the old version logs "Payment to yourself", the new version logs "PrivateSend Collateral Payment". Either a bug or a very confusing message, because I'm only checking the InstantSend check box?

Edit: Extra trivia: my PrivateSend balance in the wallet is 0.00 and it still logs the collateral when doing InstantSend.
 
Last edited: