akhealth / RFP-ORCA-Dashboards

Draft RFP for the State of AK OCS to provide mobile access to initial assessment workers.

Update Evaluation Criteria to Match OCS Priorities #20

Open mheadd opened 6 years ago

mheadd commented 6 years ago

The criteria in Section 5 of the draft will need to be updated to reflect OCS priorities.

We should also discuss the evaluation percentage for cost. The requirement is 40%; however, as you'll see below, we are proposing 20%, and we can request a waiver if we can justify the lower weighting.

Any changes should also be reflected in the table of contents.

Also, the cost proposal exhibit may need to be updated based on this discussion. In the exhibit, the total cost figure used is as follows:

TOTAL PROJECT BUDGET (not to exceed $300,000)

This may need to be altered based on team discussions.

randyhart commented 6 years ago

If it were up to me, I would lower the cost weighting to 20%. The vendors know that the budget for this is $300K. If we set the cost weighting higher than 20%, it might lead to a "buying in" situation where a vendor submits a low bid just to win the award. The tech is more important than the cost, in my opinion.

randyhart commented 6 years ago

@DanaPenner @susanjabal let me know if you want to discuss. Ultimately it's your call on this one.

DanaPenner commented 6 years ago

Discussed and approved to go ahead.

susanjabal commented 6 years ago

@DanaPenner @randyhart @mheadd Per our discussion this morning, here are the eval weights I propose for consideration/adjustment:

Technical Understanding & Approach - 15%
Project Management & Approach - 5%
User Interface/Experience Design - 10%
Staffing Plan - 10%
Similar Experience - 10%
Verbal Presentation - 20%
Cost - 20%
AK Offerors Preference - 10%

mheadd commented 6 years ago

Looks good to me, @susanjabal

sztaylorakgov commented 6 years ago

Looks good.

waldoj commented 6 years ago

We had no idea of how to score verbal presentation on our last DHSS procurement, so if we're going to include that as a separate factor, I think we have to agree on what we're measuring with it.

In the interview, we asked questions about their proposal, and if they told us something about their staffing plan that was a red flag, it only made sense for that to affect their staffing plan score, not their verbal presentation score. Ultimately, we didn't know what we were scoring. How well-spoken they were? Our retrospective cited this problem, and we concluded that it didn't make sense to have a separate score for the verbals.

So if we're going to score verbals separately, I think we'll want to figure out what it is that we're evaluating, if not all of the other things.

sztaylorakgov commented 6 years ago

You're right, @waldoj - the interviews were intended to be free-form, so we didn't go in with a strong scoring metric/approach. If we need something there, it might make sense to develop a rubric based on things we'd like to see (or are concerned about) in the interview. Some things that come to mind after our DPA experience include:

Those are just some ideas.

randyhart commented 6 years ago

I think it might make sense to consider the two (the written proposal and the verbal presentation) in the context of a job interview. Our evaluation criteria are similar to the "qualifications" you put out with the job posting.

The initial written proposal is like the resume and/or application from each applicant. We score these against the evaluation criteria to make an initial cut: the most qualified applicants are the ones invited to the job interview.

When they come in for the job interview, we are still interested in the same evaluation criteria. The interview is just another forum for due diligence before making the selection. After the interview, the "points" from the evaluation of the written proposal should be updated to account for any new information that comes out of the one-on-one interaction.

Does that analogy make sense?

sztaylorakgov commented 6 years ago

@randyhart I think the job interview analogy suggests that we have a single set of evaluation criteria and we score them initially, based on their written submission, and then a second time in a verbal interview.

I do like having a place to score distinctly for the interview, rather than just changing points in the pre-interview scoring categories. I think there was a sense in the DPA solicitation that maybe that's all we were doing, i.e., just going back and updating our scores in other areas. Then the question is: if we're only updating other scores, what is the point of separately scoring the interview?

We went into the DPA solicitation interviews with an approach that was open-ended and probably focused on details that were covered in our other scoring categories. What I noticed is that where their answers really mattered, it was because they were able to move the line by either succeeding or failing, oftentimes somewhat dramatically, at answering our questions. It wasn't just that they had a better or worse answer in terms of content; it was also that they were able to respond in the moment in a way that, for example, demonstrated team strength (or weakness), or their grounded experience (or lack thereof) in agile, etc.

I am advocating that there are other dimensions we should assess in the interview that don't fit in the written proposal. So, we should use the interview to improve our assessment of other scoring categories, and to assess those other "verbal only" dimensions. The ideas I put down above are a fair representation of what those other dimensions might be, though I was a little terse.

I also like giving 20% of the score explicitly to the interview because it conveys to the offeror that they have to be serious about that interview. I'm not too attached either way, though I really think the process we used for DPA served us reasonably well.

Sorry I ran on there a bit....trying to respond between meetings :-/

susanjabal commented 6 years ago

Good morning everyone - I am back from a week away and working on these RFP issues today.

All of the input provided above is valid and makes sense; we could go either way on evaluating the verbal presentation separately or integrating it. I suggest we keep the interview element separate, as it may encourage consistency in scoring across individuals on the team. I also suggest we add a description in Section 5.0 (Evaluation Criteria) that defines the evaluation expectations for the verbal presentation, for us and for the vendor.

How's this:

Section 5.06 Verbal Presentation

The State will evaluate the offeror's ability to expand on, improve, and discuss the proposal sections described in RFP Section 4. The State will also evaluate the offeror's team dynamics, cohesion, and communication flow, and how compatible these are with the OCS team, the agile development process, and the QAP.

As suggested above, the resultant weights would be:

Technical Understanding & Approach - 15%
Project Management & Approach - 5%
User Interface/Experience Design - 10%
Staffing Plan - 10%
Similar Experience - 10%
Verbal Presentation - 20%
Cost - 20%
AK Offeror's Preference - 10%
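For anyone building the score sheet later, here's a quick sanity-check sketch in Python. It's purely illustrative: the 0-100 raw-score scale and the `weighted_total` helper are my assumptions, not anything the RFP specifies.

```python
# Proposed evaluation weights from this thread (percent).
WEIGHTS = {
    "Technical Understanding & Approach": 15,
    "Project Management & Approach": 5,
    "User Interface/Experience Design": 10,
    "Staffing Plan": 10,
    "Similar Experience": 10,
    "Verbal Presentation": 20,
    "Cost": 20,
    "AK Offeror's Preference": 10,
}

# The weights must cover the full evaluation, i.e., total exactly 100%.
assert sum(WEIGHTS.values()) == 100, "Evaluation weights must total 100%"

def weighted_total(raw_scores: dict[str, float]) -> float:
    """Combine per-category raw scores (assumed 0-100) into one weighted total."""
    return sum(raw_scores[cat] * wt / 100 for cat, wt in WEIGHTS.items())

# Example: an offeror scoring 80 in every category nets a weighted total of 80.
example = {cat: 80.0 for cat in WEIGHTS}
print(weighted_total(example))  # -> 80.0
```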

Thoughts?

mheadd commented 6 years ago

@susanjabal @randyhart Once we resolve issue #16, I think we can finalize this one as well.

mheadd commented 6 years ago

@sandralee19 Adding you to this issue.