Case Study
Prokur.io
/
2025
Give procurement leaders the ability to score fairly and accurately.
Check out Prokur.io

Image: Elements for the Scoring Criteria upgrade
My Role
Head of Design
Team
2 Designers
1 PM
1 Engineer
Technology
Figma
Problem Statements
As a buyer, I need to be able to create custom criteria that are weighted differently so we can accurately score responses.
As a vendor, I need to know what the scoring criteria are so that I can write my response appropriately.
As a buyer, I need to see a breakdown of the scores and comments so that I can understand why a judge gave a certain score.
Empathize
The need for this feature upgrade was born from meetings our CEO, Xavier Hughes, held with procurement experts while giving talks at various events.
Because we are a small, scrappy team, we rely on business development to help make connections with potential customer personas. We then set up interviews with them so they can tell us about the pain points they encounter during their procurement process.

Image: A user can edit their scoring criteria title, details, and criteria weight right from their RFP creation form.
Define
The scoring criteria upgrade needed to incorporate a few features to be successful:
Criteria must support individual weighting
Most users would need no more than five criteria per opportunity
Judges need to understand how scoring multiple criteria affects final outcomes
The bidding organization also needs to understand how scoring affects award decisions
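To make the weighting requirement concrete, here is a minimal sketch of how a weight-normalized final score could be computed. This is an illustrative example only; the function and field names are hypothetical, not Prokur.io's actual implementation.

```python
# Hypothetical sketch: a vendor's final score is the weight-normalized
# average of judge scores per criterion. Names are illustrative only.

def weighted_score(criteria):
    """criteria: list of (weight, score) pairs; returns the final score."""
    total_weight = sum(w for w, _ in criteria)
    if total_weight == 0:
        raise ValueError("criterion weights must sum to a positive value")
    return sum(w * s for w, s in criteria) / total_weight

# Example: three criteria weighted 50/30/20, judge scores out of 5.
final = weighted_score([(50, 4.0), (30, 3.5), (20, 5.0)])
# (50*4.0 + 30*3.5 + 20*5.0) / 100 = 4.05
```

Normalizing by the total weight means buyers can use any weight scale (percentages, points, ranks) and still get comparable final scores, which is part of why weight changes ripple through judging and awarding.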
Ideate
This feature came together much faster than others we have implemented, for a few reasons. One was the overwhelming demand for it from potential users. Another was that it fit so neatly into the existing user experience. The foundation for the feature was strong; it just needed to be implemented in a way that was both feasible and impactful.
Our ideation process tends to happen in two phases. First, we meet with the PM and engineering team to understand any technical limitations. We then sketch and ideate for a couple of days and come back with a batch of low- to mid-fidelity sketches. The product team meets one more time to make a few tweaks to the designs, and then it is off to work on a detailed dev handoff.

Image: A user is previewing their scoring criteria details that will be sent out with the RFP.
Prototype
We incorporated working prototypes into most of our work for this scoring criteria upgrade. It was important to hand off prototypes because of the various error-handling and interaction edge cases. The feature set, though seemingly straightforward, involved a lot of logistical complexity for both buyer and vendor users.
For example, changing the weight of a criterion vastly affects what happens during the judging, scoring, and awarding phases. It was important that, at any moment, it was clear to every user why and how a score would be determined. Prototypes did not need to be 100% accurate; rather, they needed to represent the flow a user would go through when making changes to a scoring criterion.
Test and Deliver
Because of how quickly this feature came together, we were able to do acceptance testing and deliver it in just a couple of sprints. It was a true team effort that stretched across product, engineering, design, biz dev, and even our subject matter expert network.