March 11, 2019 4:09 pm

Dash Trust Protector election software has been released!

Dash Trust Protector election software has been released! It is a full voting system comprising a Vote Collector API, a front-end website, and a Vote Verification and Tally tool for gathering votes and electing Dash Trust Protectors. For more info:

https://blog.dash.org/trust-protector-election-software-93ed67c7455b

 

Hi everyone,

I wanted to announce the release of our Dash Trust Protector election software.

A couple of months ago I was approached and tasked with building a voting system for gathering votes for the Dash Trust Protectors, which would then be audited by a third-party auditor (later determined to be the DashWatch team).

This is done, and I wanted to announce the release of this software and delve a bit into why I made the decisions I did, which resulted in the tech stack we used.

For one, I had some constraints to take into consideration. Our developers on individual component teams are all busy with Evo work and didn't have time to spare for this project without delaying progress on Evolution components (e.g. Drive, DAPI, DashPay wallet). So, in order to prevent any delay in Evo delivery, I needed to do this myself, balancing the need to deliver quickly against not sacrificing quality.

“Why not use the existing budget system?”

It's a valid question, and I'm glad someone might ask. This is actually different from the current budget system in that it's not simply a binary yes/no, and MN owners who currently delegate or, e.g., share their voting keys with MN operators may not want to delegate or share the ability to vote for the Dash Trust Protectors.

Because of this difference, and because of the importance of the Trust Protector position, we needed to ensure that it is MN owners casting votes and not, e.g., hosting providers or budget vote delegates. We therefore determined that we needed to use the MN collateral key for signing and verifying messages.

High-Level Architecture

With that in mind, let's dig in. The software consists of 3 pieces which are loosely coupled and work together.

1. A backend API which talks to the database.
2. A frontend website which talks to the MNOs and the API.
3. A vote validation and tallying tool which validates (or rejects) votes and tallies them, to be used once the vote period ends on March 31st.

This final piece is actually the responsibility of the 3rd-party auditor, and they are free to use my implementation or roll their own. It's open-source anyway, so nothing to hide. 😉

I wanted to make sure that, up front, there weren't a whole lot of restrictions or complicated logic which could lead to bugs in getting votes into the database. Once the vote is over, the votes can easily be filtered and invalid votes discarded, but it would be terrible to have valid MNO votes not show up due to some bug in validation logic.

I think that the natural tendency for most junior software engineers would be to build tight validation into every part of the application. Of course, the more checks that exist, the more potential for bugs, and the less “elegant”, as I like to use the term.

It's also useful to remember that the valid masternode list is a moving target, meaning it changes as MNs join and leave. The only list that matters is the one on March 31st, when a single “snapshot” of the MN list will be taken for validation purposes.

Even if a vote is cast with an address that isn't currently an active MN (e.g. it might be down for maintenance), as long as that MN collateral address is in the list as of March 31st when the snapshot is taken, the vote should be considered valid. This also ensures that if MN owners were to attempt to “game” the system by moving collateral around and re-casting votes, it wouldn't matter. Only MNs valid as of March 31st when the snapshot is taken will be considered.

So we need to be more open at first and allow a little more than strictly necessary; later, when the tally is done, we can tighten down. Disk space is cheap, and our software is architected such that a little extra load shouldn't really be noticeable. Instead of a lot of extra logic and code clutter, I think it's better to be more elegant up front and validate the data at the end, which is the real proof.

Remember, invalid votes can be rejected, but valid MNO votes which didn't get cast due to buggy software can't be recovered! So the software is built with a view to keeping the input/DB accepting, with very minimal validation up front, and we enforce strong validation later.

With that in mind, let's discuss each piece in detail:

1. Vote Collector: backend JSON API written in Go

https://github.com/dashevo/vote-collector

We needed a database to store the results, and Postgres is one of the best open-source and SQL-compliant offerings. But I also needed a way to talk to it, and an HTTP API (REST or otherwise) is pretty standard.

I chose to use JSON and write my own routes instead of using REST, because REST didn't really serve my needs for the routes which exist. Due to its simplicity and speed (both at runtime and in development time), I chose to use Go for this.

Go is statically typed, compiles to a single binary (like C/C++), has a single built-in code style (so no issues with linting or mucking about with syntax config files), has cross-compilation built in, and both compiles and runs very quickly. Since it compiles to a single binary, it doesn't require an interpreter or that Go be installed on the target machine. I've heard it described as “C++ for the 21st century”. And in my assessment, unlike some other tech fads, Go will be around for a while.

So I wrote our vote-collector API in Go. This is the single point of access for the Postgres DB, meaning that all reads and writes go through the API. Configuration is done via environment variables, à la 12-Factor app.

By the way, if you aren't familiar with the 12-Factor software design methodology, I'd really recommend giving it a read. It's very short, and reinforces a lot of good software design principles.
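To make that concrete, here is a minimal sketch of environment-driven configuration for a Go JSON API in the 12-Factor style. The variable names (`DATABASE_URL`, `LISTEN_ADDR`) and the `/healthz` route are illustrative assumptions, not necessarily what the vote-collector repo actually uses:

```go
package main

import (
	"log"
	"net/http"
	"os"
)

// config holds settings read from the environment, 12-Factor style.
// Variable names here are assumptions for illustration; see the
// vote-collector README for the real ones.
type config struct {
	databaseURL string // e.g. a Postgres connection string
	listenAddr  string // address the JSON API listens on
}

func loadConfig() config {
	cfg := config{
		databaseURL: os.Getenv("DATABASE_URL"),
		listenAddr:  os.Getenv("LISTEN_ADDR"),
	}
	if cfg.databaseURL == "" {
		log.Fatal("DATABASE_URL must be set")
	}
	if cfg.listenAddr == "" {
		cfg.listenAddr = ":8080" // reasonable default
	}
	return cfg
}

func main() {
	cfg := loadConfig()
	// Plain JSON handlers rather than a REST framework.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`{"status":"ok"}`))
	})
	log.Printf("listening on %s", cfg.listenAddr)
	log.Fatal(http.ListenAndServe(cfg.listenAddr, nil))
}
```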

As far as auditing the votes goes, that is also part of this API. I have created a JSON Web Token for the auditors, which grants access to the routes that select votes from the database. The `/allVotes` route is used for auditing and can be used by the 3rd-party auditors to verify that they are able to see any and all votes, including ones they insert themselves. By doing so, and checking periodically, they are able to verify that votes aren't being excluded from the system.
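For readers unfamiliar with the pattern, guarding a route with a JSON Web Token in Go looks roughly like the sketch below. This is just an illustration of the idea: it assumes an HMAC-signed token, the github.com/golang-jwt/jwt library, and a made-up `AUDITOR_JWT_SECRET` variable; the actual vote-collector middleware may well be structured differently.

```go
package api

import (
	"net/http"
	"os"
	"strings"

	"github.com/golang-jwt/jwt/v5"
)

// requireAuditorJWT wraps a handler and rejects requests whose bearer
// token doesn't verify. Sketch only: the env var name and error body
// are assumptions, not the vote-collector's actual code.
func requireAuditorJWT(next http.HandlerFunc) http.HandlerFunc {
	secret := []byte(os.Getenv("AUDITOR_JWT_SECRET"))
	return func(w http.ResponseWriter, r *http.Request) {
		raw := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		token, err := jwt.Parse(raw, func(t *jwt.Token) (interface{}, error) {
			return secret, nil
		})
		if err != nil || !token.Valid {
			http.Error(w, `{"error":"unauthorized"}`, http.StatusUnauthorized)
			return
		}
		next(w, r) // token checks out; serve the vote-selection route
	}
}
```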

Another route is the `/validVotes` route, which actually doesn't do validation, but lists the most recent vote per MN collateral address. This is what should be used for the vote tally at the end, as it doesn't display previous votes which are superseded by newer ones. An MN owner can cast a vote multiple times per address, but only the most recent will count.
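If you're curious how “most recent vote per address” can be expressed, Postgres makes it compact with `DISTINCT ON`. The table and column names below are assumptions for illustration; the real query lives in the vote-collector source:

```go
package api

// validVotesQuery is a hypothetical version of the query behind a
// /validVotes-style route: keep only the newest vote row per MN
// collateral address. Table and column names are illustrative, not
// the vote-collector's actual schema.
const validVotesQuery = `
SELECT DISTINCT ON (collateral_address)
       collateral_address, message, signature, created_at
FROM   votes
ORDER  BY collateral_address, created_at DESC;
`
```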

I've used our AWS account to deploy this infrastructure.

The vote-collector deployment consists of 3 of these processes running behind a single load balancer, with an auto-scaling group to ensure there are always 3 running, which gives plenty of resilience even if the load is heavy.

Next, let's move to the frontend site:

2. Dash Trust Election Vote site: frontend static JavaScript website using React

https://github.com/dashevo/trust-vote-site

We needed a way for MNOs to cast votes, and we also needed a way to automate the verification and tallying of those votes. Otherwise, the manual work could take many hours or more, with lots of copying and pasting to verify signatures. This isn't necessary with modern technology, so I devised a way to count votes in an automated fashion.

To do this, we needed 3 things:

* A message with a custom structure which can later be parsed by the tally mechanism
* The MN collateral address (to prove that the signed message belongs to this address, and to later verify that this address exists in the list of masternode addresses)
* The signature, which must be done externally and pasted in

Having the site assemble the message from these pieces also ensures that the format is correct.

The format is actually a string with a prefix and a pipe-delimited list of candidate identifiers (a candidate's identifier being their first initial and last name). This identifier is used internally to keep the message short and legible, and to reduce the chance of error by removing things like spaces from the string.
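As a rough sketch of what that looks like (the prefix and the candidate identifiers below are placeholders I made up, not the actual values the site uses), the message is built along these lines:

```go
package main

import (
	"fmt"
	"strings"
)

// buildVoteMessage joins candidate identifiers (first initial + last
// name) into a prefixed, pipe-delimited string. "dtp2019:" is a
// placeholder prefix; the real site defines the actual one.
func buildVoteMessage(prefix string, candidates []string) string {
	return prefix + strings.Join(candidates, "|")
}

func main() {
	msg := buildVoteMessage("dtp2019:", []string{"jsmith", "ajones"})
	fmt.Println(msg) // dtp2019:jsmith|ajones
	// The MNO signs msg externally with the collateral key and pastes
	// the resulting signature into the site.
}
```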

As long as MN owners use this tool in the expected way, their votes should be recorded and in the correct format.

This is a static site which is generated and uploaded to Amazon S3, and deployed via Amazon CloudFront, meaning it should be available globally with next to zero lag (and we have zero servers to maintain or worry about getting hacked).

That describes the two existing pieces of deployed infrastructure, but let's examine the final piece, which does the verification and tallying after the election.

3. Vote Verification and Tally: Node.js tool to verify vote validity and tally valid votes

https://github.com/dashevo/vote-tally

I probably would have chosen Go for this as well if there were existing Dash/Bitcoin libraries for signing and validating string messages, but I didn't have time to delve into adding or porting this along with all the other pieces. Fortunately, this was simple to do in Node.js, with dashcore-lib as the only dependency.

This is technically the responsibility of the 3rd-party auditor to run, but I wanted to make sure everyone knew how the tally was intended to work, and ensure that no invalid assumptions were made about the vote data (e.g. that anything had already been verified).

This expects a JSON list of votes as returned by the `/validVotes` route, and a valid masternode snapshot as returned by the dash-cli command `masternodelist json ENABLED` on March 31st.

Once these 2 files are added (see the README for specific locations), the software verifies each vote entry, ensuring that:

* a MN collateral address can only vote once. If this condition is not met, the program exits with an “invalid dataset” message. It is expected that the correct data is generated from the API via the `/validVotes` route to prevent this from ever happening.

* the MN collateral address must be a valid Dash address
* the MN collateral address must exist in the valid MN list
* the given signature is valid for the message + address
* the vote message matches the aforementioned format required for parsing (including message prefix)
* a candidate can only be listed once per MN collateral vote
* ONLY candidates in the valid candidate list as provided by DashWatch are valid

If a vote breaks ANY of these conditions, the entire vote is nullified (with the exception of the first, which stops the program entirely as the entire dataset is considered invalid).
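To make the flow concrete, here is a simplified sketch of those per-vote checks. It is written in Go rather than the Node.js of the actual vote-tally tool, the field names are assumptions rather than the API's exact JSON keys, and the signature and address checks that dashcore-lib performs are reduced to a placeholder function:

```go
package tally

import "strings"

// Vote mirrors one entry from the /validVotes output (field names are
// assumptions for this sketch).
type Vote struct {
	Address   string // MN collateral address
	Message   string // prefixed, pipe-delimited candidate list
	Signature string // signature over Message by the collateral key
}

// isValidVote applies the per-vote rules described above. verifySig
// stands in for the dashcore-lib signature/address verification that
// the real Node.js tool performs.
func isValidVote(v Vote, snapshot, candidates map[string]bool, prefix string,
	verifySig func(msg, addr, sig string) bool) bool {

	if !snapshot[v.Address] { // must be in the March 31st MN snapshot
		return false
	}
	if !verifySig(v.Message, v.Address, v.Signature) {
		return false
	}
	if !strings.HasPrefix(v.Message, prefix) { // must match the expected format
		return false
	}
	seen := map[string]bool{}
	for _, c := range strings.Split(strings.TrimPrefix(v.Message, prefix), "|") {
		if seen[c] || !candidates[c] { // no duplicate candidates; whitelist only
			return false
		}
		seen[c] = true
	}
	return true
}
```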

Summary

So that's the run-down of all the pieces of software for the Trust Protector vote, how they work together, and a little of why I made the technical and architectural decisions that I did, given my constraints. I hope this has been informative and helpful, and I'm happy to answer any questions anyone may have.

If anyone is interested, I'd also recommend looking at the README.md for each specific piece, which has basic documentation, including configuration.

Oh, one quick note: apparently there is some confusion in the Dash community about whether or not the software located at https://github.com/dashevo is ours. It is. It's actually pretty common for larger organizations to have multiple GitHub organizations, and there are some good reasons for this. But I will discuss this in more detail next week when I write about our open-sourcing efforts. Stay tuned! 🙂

And please let me know if you have any questions.

Cheers,
Nathan

P.S.

Oh, and the voting site is now live at: https://trustvote.dash.org

Author: Nathan Marley
Original link: https://blog.dash.org/trust-protector-election-software-93ed67c7455b

