frouioui closed this issue 2 years ago
+1
On Fri, Nov 12, 2021 at 11:15 AM FlorentP @.***> wrote:
Assigned #195 https://github.com/cncf/cluster/issues/195 to @caniszczyk https://github.com/caniszczyk.
-- Cheers,
Chris Aniszczyk https://aniszczyk.org
@frouioui -
As you are setting up test infrastructure for this, please consider using our Gen3 configs (m3, c3 etc) and deploying to the new Equinix Metal IBX facilities instead of the legacy Packet facilities.
As an example, see https://metal.equinix.com/product/servers/m3-large/ for the m3.large (AMD Rome) specs; that's our current-generation workhorse machine with good availability across many locations.
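For reference, provisioning one of those Gen3 machines through the Equinix Metal device-creation API might look roughly like the sketch below. The project ID, hostname, and metro are placeholders/assumptions, and the actual API call is left commented out since it needs a valid auth token:

```shell
#!/usr/bin/env sh
# Hypothetical sketch: building a device-creation request for an m3.large
# on Equinix Metal. PROJECT_ID, hostname, metro, and OS slug are assumptions.
PROJECT_ID="your-project-uuid"   # e.g. the UUID of the requesting project
PLAN="m3.large.x86"              # the Gen3 workhorse plan referenced above
METRO="da"                       # assumption: any IBX metro with m3 availability

PAYLOAD="{\"hostname\": \"perf-bench\", \"plan\": \"${PLAN}\", \"metro\": \"${METRO}\", \"operating_system\": \"ubuntu_20_04\"}"
echo "$PAYLOAD"

# The real call would resemble (requires a valid X-Auth-Token):
# curl -s -X POST "https://api.equinix.com/metal/v1/projects/${PROJECT_ID}/devices" \
#   -H "X-Auth-Token: ${METAL_AUTH_TOKEN}" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```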
@frouioui you already should have access through #176, any issues with it?
Hello @vielmetti, thank you for the resources. Our team had already planned to migrate to those non-legacy servers in Q4 2021.
@idvoretskyi, I do have access, though I would like to give @deepthi (Deepthi Sigireddi) access to the project too.
@frouioui got it, my apologies. Invited now!
No problem @idvoretskyi, thank you for the invitation.
I now have access to the project. This issue can be closed, thank you!
@deepthi great!
First and Last Name
Deepthi Sigireddi
@deepthi
Email
deepthi@planetscale.com
Company/Organization
PlanetScale
Job Title
Software Engineer
Project Title (i.e., a summary of what you want to do, not the name of the open source project you're working with)
Arewefastyet - Nightly Performance Testing of Vitess
Briefly describe the project (i.e., what in detail are you planning to do with these servers?)
Arewefastyet is Vitess' automated performance testing tool; it serves as a nightly CI for Vitess, reporting performance over time. We run new tests against Vitess' master every night and report the results.
Equinix Metal project name:
Vitess-perf-testing
Is the code that you’re going to run 100% open source? If so, what is the URL or URLs where it is located? What is your association with that project?
Yes. The arewefastyet codebase is at https://github.com/vitessio/arewefastyet, and Vitess itself is at https://github.com/vitessio/vitess.
What kind of machines and how many do you expect to use (see: https://metal.equinix.com/product/servers/)?
We currently use m2.xlarge and c2.medium instances, and we also spin up lighter instances such as t1.xsmall and c1.xsmall for testing purposes.
What operating system and networking are you planning to use?
We currently use CentOS, though we are migrating our builds to Ubuntu.
Any other relevant details we should know about?
Link to previous issue #176.