
12.1 Testnet Testing Phase Two Ignition

Get y'er hot fresh packages!
A Dash on Fedora, CentOS, RHEL Update.

https://github.com/taw00/dashcore-rpm

Been continuing to develop and package the dashcore suite of RPMs as always, but there have been changes...
  • refactored the contrib tarball -- it makes much more sense now. And I added the exploded version of it to the GitHub repo
  • polished the desktop icons and descriptor file for Linux machines to meet freedesktop standards -- hicolor, HighContrast, etc.
  • fixed and fully tested dashd running as a system service as system user dash -- this will lead us towards MUCH more secure nodes in the future
  • created firewalld service definitions, so that you can nicely add rules based on dashcore-node and dashcore-node-testnet rather than the so very gauche 9999 and 19999 (see the sketch just after this list). ;) Polish, my friends!
  • fixed some requires and file permissions, etc.
  • Oh. And I finally packaged Sentinel (see link at the top). Sentinel is very inflexible about how it is positioned and leveraged on the filesystem and between users, so, right now, as delivered via RPM, it is ideally suited for the systemd-driven-dashd use case, but I am looking at ways to break Sentinel out of its peculiar configuration. Since it hasn't been locked down to a branch yet, maybe this will be improved in the next two weeks.
  • And yes, ideally, much of this will eventually get contributed back upstream.
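
In practice that should boil down to something like this (a sketch -- the "dashd" unit name is my assumption, the firewalld service names are the packaged ones mentioned above):

    # run dashd as a managed service under the 'dash' system user
    sudo systemctl enable dashd
    sudo systemctl start dashd
    sudo systemctl status dashd

    # open the P2P port by service name instead of raw port number
    sudo firewall-cmd --permanent --add-service=dashcore-node-testnet   # 19999
    sudo firewall-cmd --reload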

Enjoy folks. Send me a note if you have questions, commentary, criticism, or flattery. :)

-t0dd
 
Is this the current version? Dash Core Daemon version v0.12.1.0-2a43d23? I'm pretty sure I remember updating yesterday... :p

Otherwise my MNs seem to be running OK for quite some time now (at least 36 hours :) )
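
(A quick way to double-check what you're actually running -- a sketch, flags from memory:)

    dashd --version | head -1
    dash-cli -testnet getnetworkinfo | grep subversion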
 
I can't confirm network stability with version d879702. All my masternodes - updated, restarted and reindexed yesterday - are stuck. Some at block 140443, some at 140448 and some (including windows wallet) at 140485.
 
Thank you - those commands are working - MNs syncing again.
Is this just a testnet issue?

It is an issue specific to testnet: we are running testnet on a network

- with only 5-10 CPU cores mining hashes
- running 50+ masternodes on insufficient hardware (CPU/RAM)
- flooding the network with twice as many transactions as Bitcoin currently processes (normal tx and InstantSend), and
- having 240 times more superblocks than mainnet will ever see

to harden the software :)

The nodes being stuck on particular blocks is a direct consequence of this "overloaded, underpowered" network. What actually happens is that nodes are not able to process messages in time due to CPU overload or a stalled network.
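
For anyone else who hits this, the recovery is essentially telling the node to re-evaluate its tip and redo masternode sync -- roughly along these lines (a sketch, not necessarily the exact commands posted earlier):

    dash-cli -testnet getblockcount                  # confirm where the node is stuck
    dash-cli -testnet reconsiderblock <stuck-block-hash>
    dash-cli -testnet mnsync reset                   # restart masternode sync from scratch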
 
Well, thanks for the detailed explanation. Although my MNs should have enough resources, it's hard for them to keep up with testnet these days.
Will try to start a couple of more powerful servers next.
 
I'm still a bit concerned about a bandwidth waste attack vector... With DAPI we're going to be moving so much data that a true DDoS wouldn't be needed. Just suck so much bandwidth that the quotas are tripped, and large swaths of the MN network go offline because the host shuts it down... Defeating storage integrity with this is entirely feasible. Even temporary disruption could cause permanent issues.
 
Overhauled the v12.1 Masternode Guide for Fedora, CentOS, and Red Hat Enterprise Linux
https://github.com/taw00/dashcore-rpm/tree/master/documentation

  • The documents are modular instead of one giant document.
  • Installation, because everything is packaged, is trivial from the OS on up.
  • Configuration assumes you want to run the masternode as a systemd service (as it really was designed to be); see the sketch just after this list.
  • Firewall configuration and Fail2ban instructions are covered.
  • Sentinel is packaged and auto-installed.
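
To give a flavor of the configuration step, it boils down to something like this (a sketch -- the config path and exact option names may differ, so follow the guide itself):

    # /etc/dashcore/dash.conf (assumed location) -- the masternode essentials
    testnet=1
    masternode=1
    masternodeprivkey=REPLACE_WITH_OUTPUT_OF_masternode_genkey
    externalip=REPLACE_WITH_YOUR_STATIC_IP:19999

    sudo systemctl restart dashd    # then restart the service to pick it up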
Nearly ready for Feb 5! :)
 
I'm still a bit concerned about a bandwidth waste attack vector... With DAPI we're going to be moving so much data that a true DDoS wouldn't be needed. Just suck so much bandwidth that the quotas are tripped, and large swaths of the MN network go offline because the host shuts it down... Defeating storage integrity with this is entirely feasible. Even temporary disruption could cause permanent issues.
@flare can you please comment on this?? Could this be a problem? I would like to know other opinions, because it sure sounds like a good point (depending on how much data will be incoming :) Thanks :)
 
Most MNs are hosted on big VPS providers, and these days you usually get 1 TB+ of bandwidth thrown in per month. That's a LOT of data to attempt to generate on the network. There is certainly a possibility of an attack vector, but as long as there is data validation on field entries within the database structure, this should ensure that data formats and field lengths are kept to appropriate standards.

DAPI and DashDrive would have to start accepting data such as images/video in order to run up this sort of bandwidth, and/or an attacker would have to find a way of recursively querying the DAPI without consequence or economic cost; in either case that may be enough to trip 50%+ of the current MNs in extreme circumstances. I don't believe that accepting such storage-intensive forms of data is part of the current Evo architecture at present? My understanding is that it'll be purely restricted to alphanumeric input/output... likewise with DAPI traffic. That's not to say that huge amounts of bandwidth can't be consumed by an alphanumeric-based attack; it's just whether such an attack would be economically feasible.

Based on data from my MN farm I can see an average bandwidth utilisation of 854 MB per MN per day. That would mean we would have to see a 116,996% increase in traffic overnight in order to knock out the majority of the MN network... Very unlikely, but certainly something for the Core Team to consider (if they haven't already).
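
For what it's worth, that 116,996% figure checks out if the scenario is an attacker forcing a node to burn its entire 1 TB monthly quota in a single day:

    # (1,000,000 MB quota / 854 MB per day - 1) * 100 = required traffic increase
    echo "scale=2; (1000000/854 - 1) * 100" | bc    # -> 116996.00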

Interesting point @camosoul let's see what some of the more technical peeps have to say.

Walter
 
It seems that the sledge-hammer method of preventing blockchain spam, just adding a fee, would be punitive to those simply wanting updated data. For example, any vendor, online or retail, querying for up-to-date data pending a transaction would have to pay said fee even if the customer buggered out or abandoned the cart. In some cases, a wake-up-and-poll that accelerates to repeated, multiple pings in the time leading up to, and during, a transaction could incur expense in an almost false-positive model that would be indistinguishable from an attack on the same metrics.

I'm just dreaming up the problem. Maybe it's an over-thought non-problem. But the reality of simply spamming large shards off the network using pipe limits seems a concern that needs addressing. DAPI will fail if DashDrive can be broken. This makes the very existence of DASH dependent on the resilience of DashDrive. Clients will be dependent upon DAPI returning data from DashDrive on every little thing, even the blockchain... Knowing DAPI's nature, there's really no way to use the conventional means normally applied to mitigate such a thing. If DashDrive is compromised, it all falls down.

My pizza box comes with 20 TB, and I can upgrade to unmetered 100 Mbps for a fee costing more than the whole machine... Last month I used only 11 GB, so that's a drop in the bucket. But what we're running now in no way resembles DAPI, and that data is truly useless. This doesn't yet even consider shard propagation, or shifts in topology dynamically assumed by organically moving goalposts... The resilience factor itself could become cumbersome. The sharding model has to be geometric... One could simply bring nodes on and offline to change the node count, and push resilience refactoring around for no reason, just to waste the pipe...
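
(If anyone wants to watch their own node's consumption the same way, vnstat gives those per-day numbers easily -- the interface name here is just an example:)

    sudo dnf install vnstat                          # Fedora (use yum on CentOS/RHEL)
    sudo systemctl enable vnstat && sudo systemctl start vnstat
    vnstat -i eth0 -d                                # daily in/out totals for eth0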

I'd like to know more about how this is being handled. It's my observation that 12.1 will be a limited rollout of some of these functions such that more real-world use data can be gathered upon which these decisions would be made. There's just not enough known to make the choices in the current situation, and nothing like this has ever been done which could provide useful comparison data. Clouds as we know them don't work like this. Storj has such a low utilization that it's not useful. I'd like to know what they're keeping an eye on, specifically, and why. Just cuz. Keeping finger on pulse...

Simply moving to unmetered plans could be sufficient, with the 3rd-tier quorum response time being the gatekeeper/default bottleneck. But few MNOs are running their own dedicated boxes... You can't usually get unmetered pipes for a single VPS... This could drive even more unhealthy consolidation to MN services that shouldn't exist in the first place... There needs to be a paradoxical element added, a la more hashpower causing difficulty to increase.

I'm sure glad this isn't up for a vote by the MNs... I'm willing to bet 95% of them don't even understand this topic... Hell, I barely have enough clue.
 
again, @eduffield @flare @UdjinM6 @moocowmoo @t0dd Can anyone speak about what Camo is saying above?

I don't know how many API calls you're planning on servicing per second, etc... But have you compared what we're doing with other services like this? Are there any other services like our DAPI? LOL How does NASDAQ do it?

Anyway, if we don't hear from the boys, @camosoul, I think whatever it requires will be affordable for the MNs to implement, because heavy use ought to equal higher price, no? But it is a good question, and I'd like to know what the guys are thinking on this, and that they believe it's do-able (not that they never thought of this!)

We'll probably have to see how it goes as we build this system and test it out and add functionality....?
 
again, @eduffield @flare @UdjinM6 @moocowmoo @t0dd Can anyone speak about what Camo is saying above?

I don't know how many API calls you're planning on servicing per second, etc... But have you compared what we're doing with other services like this? Are there any other services like our DAPI? LOL How does NASDAQ do it?

Anyway, if we don't hear from the boys, @camosoul, I think whatever it requires will be affordable for the MNs to implement, because heavy use ought to equal higher price, no? But it is a good question, and I'd like to know what the guys are thinking on this, and that they believe it's do-able (not that they never thought of this!)

We'll probably have to see how it goes as we build this system and test it out and add functionality....?
I think they're a little busy right now. I'm not demanding an immediate response or anything... This is the sort of thing that would/should be on every MNO's mind, were they smart enough to comprehend it...

Just want to know how these things are being handled/mitigated. When you guys get the time...
 
I expect that running a MN in the not-too-distant future (3-5 years) will become increasingly complex as well as resource-heavy. It'll probably cost us hundreds to thousands of dollars a month (well, later it'll be thousands). In fact, that might be good, because we'll probably have to get out of the common service providers and either create our own server centers or go to very specialized servers. Whatever happens, we *should* earn enough to pay for that.

But yah, I'll stop bugging the boys. Although I'd like to confirm whether any of the previously talked-about features are indeed going to make it into 12.1, like the automated spork trigger (no longer requiring human intervention). And I'd like to see a setup where, when the "rules" are relaxed during a spork, only the previous version and the new version of the software are acceptable. Not have it so that miners can haul off with all the coins, using 3-year-old wallets, until consensus is achieved. This is getting costly. I really hope this is the last time we're exposed to this.

Also, one thing that I don't feel clear on is, are mixing requests already going through the DAPI? Are masternodes "blinded" with this version yet? Thanks to anyone that can answer!!!
 
Is it no longer possible to InstantSend funds to an address in your own wallet?

I updated my testnet clients yesterday and some of my tests broke: I was not seeing InstantSend messages on the network for transactions that move funds inside the client's wallet. The fee for InstantSend is charged, but the transaction is not confirmed as an InstantSend and requires the usual 6 blocks in the qt-client. If this is the behavior you want for internal transactions, then why charge the full InstantSend fee?

And the transaction log in dash-qt is confusing now: the old version logs "Payment to yourself", the new version logs "PrivateSend Collateral Payment"; either a bug or a very confusing message, because I'm only checking the InstantSend check box.

Edit: Extra trivia: My PrivateSend balance in the wallet is 0.00, and yet it still logs the collateral when doing InstantSend.
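
For reference, the failing test boils down to something like this (a sketch -- the trailing booleans on sendtoaddress that request InstantSend are from memory, so check the RPC help):

    ADDR=$(dash-cli -testnet getnewaddress)
    # args: address amount comment comment-to subtractfee use_instantsend (assumed order)
    TXID=$(dash-cli -testnet sendtoaddress "$ADDR" 1.0 "" "" false true)
    dash-cli -testnet gettransaction "$TXID"    # inspect confirmations / lock status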
 