
DashRemix makes Masternode voting easy with unbiased ratings of budget proposals and voting APIs

Well, in case you want to also keep track of proposals after they have passed, you should work with http://dashwatch.org. That's basically what they are doing. It would probably be easiest if you handle things before the proposal is accepted and then, if that happens, just provide all the necessary info to Dashwatch and let them take it from there.
 
Hey Singleton, thanks again for pointing us in the right direction. Dashwatch is great and something I hadn't seen before; it just shows we still have plenty to learn about the community! We'll try to get in touch with the team behind it; maybe there's some way for our work to feed into their pipeline, and vice versa. The last thing we want to do is duplicate efforts (effort that could be spent on more productive things!)
 
For those interested, @solarguy, @jeffh, @TheSingleton, @Arthyron, we have put together a few pages on the rubric proposal. Here's a link to a shareable google doc.

It includes a draft of our proposed categories and how we break those down into aspects and measures. It also has an illustrated example of how we expect the DashRemix page to present all the information, from the high-level 'at-a-glance' view down to an expanded and detailed summary.
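For anyone who thinks in code, here's a minimal sketch of how a category → aspect → measure hierarchy like the one in the doc might be modelled, with verdicts rolling up into the 'at-a-glance' view. All names here are hypothetical illustrations, not DashRemix's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Measure:
    name: str
    verdict: Optional[str] = None  # e.g. "strong", "adequate", "weak"; None = not yet rated
    evidence: str = ""             # reasoning shown in the expanded, detailed view

@dataclass
class Aspect:
    name: str
    measures: list = field(default_factory=list)

@dataclass
class Category:
    name: str
    aspects: list = field(default_factory=list)

    def at_a_glance(self):
        """Roll all measures up into high-level counts for the summary view."""
        verdicts = [m.verdict for a in self.aspects for m in a.measures]
        rated = [v for v in verdicts if v is not None]
        return {"category": self.name, "rated": len(rated), "total": len(verdicts)}

# Hypothetical data: one category broken down into aspects and measures
team = Category("Team", [
    Aspect("Experience", [Measure("Relevant track record", "strong",
                                  "Team has shipped comparable products")]),
    Aspect("Transparency", [Measure("Identities public")]),  # not yet rated
])
print(team.at_a_glance())  # {'category': 'Team', 'rated': 1, 'total': 2}
```

The point is just that each box in the summary keeps its evidence attached, so the at-a-glance view and the detailed view are two renderings of the same data.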

We've only drilled into one category in any detail; we really want to refine the areas with your help and MNOs' feedback, so this is just a draft to give you an idea of how the system works. We welcome feedback here, and I'll be reaching out on Discord and Reddit to get more eyeballs on it too.

Everything is subject to change - the end goal is to have a framework that is really helpful to MNOs and proposal owners alike!
 
That looks quite good and shows again just how professionally you work.

You will definitely have my vote.
 
Thanks for the overview (subject to change of course) of the evaluation rubric.

How about you apply the rubric to 2 or 3 examples to see what the results would look like? These could be current proposals, previous proposals, or one or two of each.

I would especially be interested in seeing you apply it to a controversial proposal where it was complex to assign value or ROI. I'm sure the community would be happy to recommend 2 or 3 that were hard to call. I would suggest two:

How to on-board 300,000 new users
https://www.dash.org/forum/threads/...0-new-users-in-one-geographic-location.30087/

and, the CryptoCon sponsorship prop
https://www.dash.org/forum/threads/...rld-crypto-con-sponsorship.30493/#post-173778


After all, the easy ones are, well....easy.

finest regards,
 
@JackDashRemix -- The sample rubric looks pretty good. You even anticipated some of my concerns later on in the document about when the rubric might not apply to certain projects or teams based on the info available, etc.

If a team feels that the system unfairly categorizes them, is there some sort of appeals process, and how is that arbitrated?
 
Thanks for the comments, Arthyron; always helpful, and we really appreciate your level of engagement.

We are having a bit of a discussion on the 'not rated' section internally, for example having a separate 'not applicable' rating if a measure doesn't apply, and a 'not rated' if there just wasn't enough information given. At the same time we don't want to unnecessarily complicate things, so we might stick with 'not rated' and, like every other box, make it expandable so you can check the evidence/reasoning for not rating it.
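For illustration only, the distinction we're debating could be sketched like this (hypothetical names, nothing final), with the reasoning always attached so the box stays expandable either way:

```python
from enum import Enum

class MeasureStatus(Enum):
    RATED = "rated"                    # a verdict was assigned, with evidence
    NOT_APPLICABLE = "not applicable"  # the measure doesn't apply to this proposal
    NOT_RATED = "not rated"            # it applies, but there wasn't enough information

def box_label(status, reasoning):
    """Like every other box, a status is expandable to show its reasoning."""
    return f"{status.value} ({reasoning})"

print(box_label(MeasureStatus.NOT_RATED, "no budget breakdown provided"))
# not rated (no budget breakdown provided)
```

Collapsing NOT_APPLICABLE into NOT_RATED would simplify the enum at the cost of making the reasoning text carry the distinction instead.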

Regarding contesting results, this is obviously an important function. Since we'll be evaluating proposals, it's safe to say that some owners will have issues with our evaluations, particularly as I think very few projects will score well in every area. That isn't to say they aren't good proposals; some ideas naturally fall down against certain measures, and certain measures naturally oppose each other (e.g. a highly innovative proposal is likely also highly risky, and a low-risk project is typically less innovative).

We are still designing how that contestability function might work, and we're keen to hear ideas on this too. There are a few issues to work through:
  • How it fits into the platform (e.g. do we include a form against every measure, just a catch-all comment box, or some other mechanism?). To begin with we might just have a support email, and then put in a better system later once we know how that kind of interaction typically goes. Needless to say, there will be 'a' solution at launch, even if it's imperfect.
  • Who gets to contest results (just proposal owners, or MNOs and other users as well?), and if it's just owners, for example, how do we verify it's really them contesting?
  • How we publish contests against results, and how we publish follow-ups.
In an ideal world we would have all the info we need to produce accurate results, and owners would be agreeable to a fair evaluation; of course that won't always be the case, so this is a pretty critical function.

My preference is that any contests and our responses are all public and transparent, just like the rubric framework and our evaluation work, so readers can see the whole trail of communication behind an evaluation. Right now I'm not sure what level of engagement we can expect on the average proposal either; on CoinRemix, for example, we engage closely with project teams to ensure the technical data is all correct, but I don't think we can expect the same level here.

We are currently debating a reduction in the number of reviews we deliver as part of this proposal, just to work through the implementation of these kinds of features and see how our ideas work and what workload they add to the project. We would probably cherry-pick 10 projects for the 2nd month of this proposal, made up of a broad representation of budget proposals, to get good data on how a typical month might look: e.g. a large, expensive proposal, a small proposal, a proposal with a totally deficient amount of information that will require a lot of communication with the project team, a rushed proposal submitted in the last days of a voting period, etc.

@solarguy that last point ties in to your comment too; honestly, we would rather not deliver example evaluations now. The reason is that developing the rubric takes time and care to ensure it's good quality, and that's what a big chunk of this proposal is for. By good quality I mean a framework that delivers really accurate results: each statement against each measure, against each aspect, in every category needs to be crafted so it isn't just word soup; it must be something any evaluator can clearly map to the proposal information in front of them and to the verdict that lines up against it.

If we rushed a couple out at this stage, pre-proposal, they wouldn't be a good representation of the quality we hope to deliver, we won't have the platform in place to show them off properly, and we'd have to throw out a lot of that draft work once we alter the rubric to fit MNOs' feedback (a process we have already started with the engagement here with people like yourself). We'd rather reduce the scope and funding of the proposal a bit to make it more palatable as a pilot project than half-heartedly deliver something before it's ready and give the wrong impression of the project. I guess what we are asking is the same as any proposal: trust us!

Cheers!
 
I would rather see a beta version before funding this, as it doesn't seem to have a tangible return to the network. Yes, it means less work for MNOs, but we are compensated decently by the network to review proposals. Also, automated voting makes me nervous.

What is your experience with dash?

Why doesn’t Coinremix.com accept crypto as payment?

How many users do you have on Coinremix.com? It has only been around since September.
How has your rubric played out on ICOs?
 
Hi Jack! Any thoughts on the ability to delegate votes to others, should MNOs want to do so? I spoke about why I think this is necessary in this video, at the 47-minute mark:
 
Hi Ftoole - it's freakin awesome that you view MNO rewards as compensation for this service. However, I think most (and, IMO, fairly) see MNO rewards as a return on the significant capital tied up in running an MN. On the whole, the amount of work most people are willing to spend sorting through the dozens of proposals each month is far less than what it takes to cast fully informed votes. If I were an owner of just one node (or even a few), I would likely feel the same way.

I think whatever we can do to increase MNO engagement is critical to the future of Dash; in recent months, the "rational disengagement" we've been experiencing has started to manifest as a round of growing pains. (I'm not saying this proposal is or isn't the way to do so; I'm just advocating for discussion of the issue.) And in full disclosure, I've gained from those efforts as a former (and potentially current) proposal owner who sought to facilitate that process.

Thanks for doing your part!
 
Hi @Ftoole thanks for your questions.

We are looking at reducing the scope and budget of this proposal so it's more of a pilot project and more palatable for treasury funding. Automated voting is not part of this proposal (likely the next), and it's a discussion worth having. There are safeguards we can put in place that should mitigate concerns, e.g. a check that MNOs have reviewed their votes before confirming. We are already thinking about security and centralisation concerns, for example by writing evaluation outcomes to the blockchain to ensure no one can manipulate the system.
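To make the tamper-evidence idea concrete, here's a rough sketch (purely illustrative, not our actual implementation) of one common pattern: hash a canonical form of the evaluation record and publish only the digest on-chain, so anyone can later verify the published evaluation hasn't been altered:

```python
import hashlib
import json

def evaluation_digest(evaluation):
    """Canonicalise the record and hash it. The digest is what would be
    committed on-chain; the full record is published off-chain, and anyone
    can recompute the hash to check the two still match."""
    canonical = json.dumps(evaluation, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical evaluation record
record = {"proposal": "example-proposal", "category": "Team", "verdict": "strong"}
digest = evaluation_digest(record)

# Later verification: recompute and compare against the stored digest
assert evaluation_digest(record) == digest
```

Key-sorting and fixed separators matter here: without a canonical serialisation, two logically identical records could hash differently.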

I think, and I would like to hear your opinion, that we can really add a lot of value to the network. Our initial goal is simply to remove pain points for MNOs. We want to increase their engagement with the voting system by allowing them to quickly evaluate all the information on the table. The rubric will draw out additional project details that will help people make informed decisions. We also want to create guides for proposal owners to make sure they are giving MNOs the right amount of detail and information, giving them the best chance to be fairly evaluated by the community and to succeed if the project stands up. Ideally this will all result in better proposals and more informed votes; these are 'upstream' benefits to the Dash ecosystem that produce better long-term outcomes for the project (since, ideally, better projects are moving forward and working for Dash's interests out in the world).

We were introduced to Dash by the Kuvacash team. Personally, I have been interested and involved in crypto for the last couple of years, though mostly in the Ethereum space. Our team has a blockchain developer (Ben) and a very experienced Silicon Valley product designer (Devin). All of us are crypto enthusiasts, and since learning about Dash we have really come around to the DAO system and its obvious benefits for the protocol. Aspirational projects like Kuva are good examples of that.

CoinRemix will accept crypto at some point; for now it's more of an accounting issue (we are incorporated in Australia and crypto tax isn't well sorted out yet!). We pivoted last year from software development (crypto portfolios etc.) to the evaluation system now in place, so we are still working through some of these things; our focus at the moment is mainly on getting content out.

Our method for CoinRemix is different from DashRemix's. DashRemix is about establishing an analyst-agnostic rubric, so that any team (not just DashRemix) can use these tools and contribute to the community. CoinRemix, on the other hand, is our expert opinion on ICOs that people pay for; it has some rubric elements (more technical, objective evaluation of team size, technical implementation, community metrics, etc.), but really we are deep-diving into the technical aspects of the blockchain technologies behind ICOs. Most reviews focus on 'will this give me good returns after the ICO?', whereas we focus on 'are the fundamentals of this project, such as the technical foundation and market feasibility, solid?', which is more of a long-term view on the viability of crypto out in the world. You can view our latest review here; it is currently outside the paywall. Just to be clear, all DashRemix framework and content will be open source and free to access.

Gday @craigums, thanks a lot for your input and for linking me to your video. I watched through that section a couple of times and it is a very interesting idea. Increasing engagement with MNOs is really at the core of what we're doing, and of your discussion there; I think any measurable increase will result in better outcomes for Dash. There are multiple solutions to the issue; ours came about by asking: 'How can we make evaluating proposals a painless experience for MNOs when there are so many proposals being posted and a never-ending debate around a lot of them?'

Our answer is to provide a framework that funnels all that information into one place, in a way that evaluates it fairly and gives a quick snapshot, with a detailed dive available for any MNOs who wish to be more engaged. We believe an informed vote is a better outcome than no vote at all in this case, especially when a relatively small number of MNOs are responsible for the strategic direction of this community at the moment. Regarding delegation of votes: definitely possible, and a good idea too. There are a few ways to do it; for us this will likely come under another proposal, along with the APIs.

As someone in your YouTube comments said, it's probably possible to do it via DashCentral voting, which would be fine by us. A method integrated with DashRemix could be a 'curator' system, e.g. MNOs who use our rubric framework to do their own evaluations. Other MNOs could follow them and choose to vote based on their evaluations. This requires a bit of trust that curators are using the rubric properly, and it raises some more issues to think about around conflicts of interest, best- and worst-case scenarios, etc., but there is probably an outcome there that works well enough. If you're a gamer, it may look similar (in a more structured manner) to Steam's curator functionality.
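A toy sketch of how that curator-style delegation could resolve a vote (again, purely hypothetical, not a committed design): an MNO's own explicit vote always wins, and only in its absence does the followed curator's verdict apply.

```python
def effective_vote(mno, own_votes, follows, curator_votes):
    """Resolve an MNO's vote on a single proposal: their own explicit vote
    always wins; otherwise fall back to the curator they follow, if any."""
    if mno in own_votes:
        return own_votes[mno]
    curator = follows.get(mno)          # None if the MNO follows no one
    return curator_votes.get(curator)   # None if the curator hasn't voted

# Hypothetical data for one proposal
own = {"mno1": "yes"}                # mno1 voted directly
follows = {"mno2": "curatorA"}       # mno2 delegates to curatorA
curators = {"curatorA": "no"}        # curatorA's rubric-based verdict

print(effective_vote("mno1", own, follows, curators))  # yes (own vote wins)
print(effective_vote("mno2", own, follows, curators))  # no (delegated)
print(effective_vote("mno3", own, follows, curators))  # None (no vote, no curator)
```

The own-vote-wins rule is the important design choice: delegation never overrides an MNO who has actually engaged with a proposal.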

Thanks and regards,
Jack @ DR
 
Hey Singleton, thanks for having a read. Actually, we were introduced to Dash by some pretty recognisable names in the community and came up with the idea after a few discussions with them, precisely about the number of proposals that come up each voting cycle. We've let them know our proposal is up for discussion and I'll let them give their thoughts, good or bad!

I can show you some of the work we've done at CoinRemix; ultimately it works to a slower drumbeat than DashRemix will have to, and on a different model, but the level of analysis should be heartening. I also aim to get a video up before we submit, which goes through a bit of that process too.

Regarding your latest question: right now we have a rough roadmap in mind consisting of additional proposals. Phase 1 is as you see; phase 2 is detailed there too (mostly around the API). For phase 3 we have some ideas for additions to DashRemix, but I expect our ideas will change by then. We will have some ongoing running costs. It may even be that we hire additional people from the community (without conflicts of interest) to continue evaluating proposals against the rubric, just to make sure we cover every proposal that comes out. I see this running a few ways:
  1. We continue adding value with new proposals that contain new features. This might just be a few proposals, or we might uncover a whole bunch of work that should get done and actually helps MNOs.
  2. At some point the features are pretty much locked (but open source if other proposals want to build on them) and we just ask for a lower amount of DASH as a running cost of completing the evaluation.
  3. If at any point it's not valuable to the community, our proposals stop getting through and it dies a natural death. Hopefully we don't reach this point and DashRemix is a really useful tool for MNOs and proposal owners into the future.
Thanks :)
Let's see what phases 1, 2 and 3 look like before we fund it all, right? Makes sense to have a plan before spending the money, in my opinion.
 
@satoshilives Totally agreed Satoshi. This proposal is just for the first part and I think we may reduce some of the scope and budget as mentioned in earlier comments.

We just wanted to make it clear that there are ideas for further development if this project is successful and useful for the community. Those ideas will probably change as we work on the project anyway.
 
@JackDashRemix
Hey, are you still working on this? I wouldn't recommend submitting this cycle, but hopefully, with the price increasing, there will be more room next cycle.
 