Closed vr-varad closed 2 months ago
Dear @kgryte, @Planeshifter, and @Pranavchiku, I kindly request your review of this at your earliest convenience. Your valuable feedback would be greatly appreciated. Thank you.
@vr-varad Thanks for filing this draft proposal. A few comments:
(t.equal, etc.)
"Refactor existing tests for improved readability" involves restructuring the test suite to enhance clarity and maintainability. I'll prioritize critical tests and refactor them gradually, preserving functionality. While I aim not to make any functional changes, I'll consult maintainers if any such situation arises. Given the project's scale, I'll assess feasibility and prioritize tasks by impact and available resources to ensure efficient execution.
For running tests simultaneously, I suggest starting with a per-file approach. This means tests in the same file can run together, which can speed things up without making it too complicated. But I'm open to other ideas if needed, depending on how the project works and how fast we need tests to run. I'll also consider adding this feature later, but for now, I'll focus on the main plan and adjust as needed.
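To make the per-file idea above concrete, here is a minimal sketch of how test files might be grouped for concurrent execution. All names here (`makeBatches`, the file names) are illustrative, not actual stdlib APIs; a real runner would then execute each batch's files in worker processes while keeping tests within a file sequential.

```javascript
// Partition a list of test files into batches of at most `limit` files;
// files in the same batch would be run concurrently, one process per file.
function makeBatches( files, limit ) {
    var batches = [];
    var i;
    for ( i = 0; i < files.length; i += limit ) {
        batches.push( files.slice( i, i + limit ) );
    }
    return batches;
}

var files = [ 'test.a.js', 'test.b.js', 'test.c.js', 'test.d.js', 'test.e.js' ];
console.log( makeBatches( files, 2 ) );
// [ [ 'test.a.js', 'test.b.js' ], [ 'test.c.js', 'test.d.js' ], [ 'test.e.js' ] ]
```

The batch size would become a tunable concurrency limit, so the runner can trade speed against resource usage.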
@kgryte, I've implemented the suggested changes and addressed your inquiries in the proposal. Please review it at your convenience. Let me know if there's anything else you need or if you encounter any issues.
@kgryte @Pranavchiku Any more suggestions or changes I could make to improve my proposal.
Hi @vr-varad, thanks for sharing your draft proposal!
I still have some of the questions that @kgryte mentioned in his previous comment, specially because you are naming a lot of "refactoring" tasks without being specific as for what changes are going to be made and to what packages. I also have these questions,
- How are you going to do the test reporting?
- What do you mean by refactor test suite structure for conciseness in week 7?
- What do you mean by sample application?
Certainly, let me address those questions:
Test Reporting:
Refactor Test Suite Structure for Conciseness in Week 7: It just means rearranging the structure of test files and modules for better clarity and enhanced readability.
Sample Application:
Building on Stephannie's comments,
- What is meant by "Matcher"?
- We've already migrated from Istanbul to C8, so that can be removed from your timeline.
- What is meant by "handling multiple failures per test"? Does tape not currently handle multiple failures? Or are you proposing something different?
- We're not interested in implementing hooks for setup and teardown. We don't use such "hooks" anywhere in our existing tests, so it is not clear why we'd need to implement them in our in-house test runner.
- What's meant by "error handling mechanisms"? What, specifically, do you have in mind here?
In general, it would be good if you can flesh out your proposed tasks to make things more concrete.
Certainly, let's address each of these questions:
What is meant by "Matcher"? A matcher is like a smart assistant used to validate truthy values. It makes tests clearer and easier to read. The main reasons I am adding matchers are that they read like plain English, they can encapsulate complex logic, and they produce a clear failure description. For example:
t.ok( [ 1, 2, 3 ], includes( 2 ), 'Array contains 2' );
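To illustrate the idea, here is a minimal matcher sketch. Matchers are NOT part of tape or stdlib; `includes` and the `ok` wrapper below are hypothetical names used only to show the shape of the concept: a matcher returns a `{ pass, message }` pair so the failure description is built into the matcher itself.

```javascript
// Hypothetical matcher: returns a function which validates a value and
// carries its own failure message.
function includes( expected ) {
    return function match( actual ) {
        return {
            'pass': Array.isArray( actual ) && actual.indexOf( expected ) !== -1,
            'message': 'expected array to include ' + expected
        };
    };
}

// Minimal t.ok-style wrapper that accepts a matcher as its second argument:
function ok( actual, matcher, msg ) {
    var result = matcher( actual );
    return {
        'pass': result.pass,
        'message': msg || result.message
    };
}

console.log( ok( [ 1, 2, 3 ], includes( 2 ), 'Array contains 2' ) );
// { pass: true, message: 'Array contains 2' }
```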
What is meant by "handling multiple failures per test"?
Why implement hooks for setup and teardown if they're not used in existing tests?
What's meant by "error handling mechanisms"?
@vr-varad Please refrain from replying with LLM generated answers.
@kgryte Sorry, but I have my own notes and I am using an LLM to frame sentences, because my notes sometimes don't read clearly on their own. I am referring to my own notes throughout.
Sorry for that, but none of the answers are LLM generated; they are all sentences framed from my notes.
@steff456 @kgryte After checking the packages and modules that use tape, I found out that there's no need to make any big changes (like restructuring or rewriting code). Everything seems fine as it is, so we won't be doing any refactoring.
Understood. While we recognize that LLMs will continue to play a greater role in development, it is important that we hear your voice. In this particular case, LLM-inspired answers did not answer our questions, especially as to how, e.g., "Matchers" applied to stdlib and why these had their own dedicated weeks (Weeks 10-11), especially when the set of assertions we use throughout stdlib is relatively limited (e.g., equal, notEqual, ok, notOk, pass, fail, deepEqual). These can be implemented in about 20 minutes total.
In short, leverage LLMs, but do so in a way which enhances your understanding. If we wanted LLM answers, we could have just used an LLM ourselves. Your task is to demonstrate that we should select you over an LLM.
@kgryte I understand what you are saying: I should not be using it to answer the questions, but only to take help from it, especially in the case of these projects. Matchers are not used readily in stdlib; I was adding the matcher feature for future-proofing, that's all. In that case, I am thinking of combining weeks 7-8-9-10 into 2 weeks: making a sample implementation, implementing matchers, and enhancing the test runner's code structure and performance.
@kgryte @steff456 What do you think? It would be helpful if I could get your reviews on this decision, and I would be grateful for that.
In general, I am against implementing matchers in the test runner. That is simply not a convention we use, and we're not planning on migrating. This test runner should target specifically how we write tests. There may be some innovation, but this is not an area where we're interested in innovating.
As an example of where we are interested in innovating is in supporting something like
t.throws( foo( value ), 'throws a %s when provided %s', 'TypeError', JSON.stringify( value ) );
Notice the support of string interpolation. Compare that to how we currently write similar tests in the project.
In that case, I could shift from implementing matchers to adding an implementation like this. I have patterns like
t.strictEqual( v, out, 'returns expected value' );
t.deepEqual( v, expected, 'returns expected value' );
becoming
t.strictEqual( v, out );
t.deepEqual( v, expected );
as these are heavily used throughout the modules; the third parameter could be optional, defaulting to 'returns expected value'.
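A minimal sketch of the optional-message idea, assuming a shim rather than the real `t.strictEqual` (all names below are illustrative):

```javascript
// Hypothetical assertion with an optional message defaulting to the
// project-wide convention 'returns expected value'.
var DEFAULT_MESSAGE = 'returns expected value';

function strictEqual( actual, expected, msg ) {
    return {
        'pass': ( actual === expected ),
        'message': ( msg === void 0 ) ? DEFAULT_MESSAGE : msg
    };
}

console.log( strictEqual( 3, 3 ) );
// { pass: true, message: 'returns expected value' }
console.log( strictEqual( 3, 4, 'should match' ) );
// { pass: false, message: 'should match' }
```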
@kgryte How about adding support for custom assertion methods, where you can create your own assertion methods, which would make tests more independent? For example (pseudocode):
assert.isBlah = ( val, desc = 'should be blah' ) => {
    // return whether `val` is "blah"
};
test( 'should be blah', ( t ) => {
    t.isBlah( 'x' );
});
@kgryte What are your thoughts on this?
Nope. :) Also not something we do in the project and, again, would entail a significant refactoring.
Since you are seeking other innovations, another would be t.approxEqual for testing approximate equality. See how we currently test approximate equality in the project. It would be nice to cut down on some of the boilerplate to do so. However, this is not as straightforward as it might appear.
t.throws( badValue( values[i] ), TypeError, 'throws an error when provided ' + values[i] );
to
t.throws( badValue( values[i] ), TypeError, values[i] );
const expected = 10.0;
const actual = 10.05;
const epsilon = 0.1; // Allowable difference
t.approxEqual(actual, expected, epsilon);
something like that (or not checking the type).
5. t.comment( message ): print a message without breaking the output.
@kgryte I could work on these methods (I would be updating the above list)
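The t.approxEqual idea above can be sketched as follows. This is a deliberately simple, hypothetical version using an absolute tolerance; as noted earlier in the thread, a real implementation is not as straightforward as it might appear (it would also need to consider NaN, infinities, and relative tolerances):

```javascript
// Hypothetical approximate-equality check with an absolute tolerance.
function approxEqual( actual, expected, epsilon ) {
    if ( typeof actual !== 'number' || typeof expected !== 'number' ) {
        return false;
    }
    return Math.abs( actual - expected ) <= epsilon;
}

console.log( approxEqual( 10.05, 10.0, 0.1 ) ); // => true
console.log( approxEqual( 10.5, 10.0, 0.1 ) );  // => false
```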
Not sure why you'd need to create a separate website. We already have API docs published on the project website.
No on matrixEquals. That is not necessary and not common.
@kgryte In the proposal I have added a few new tasks:
Discovering test files, which will focus on -
Supporting asynchronous tests, where I will
Tagging tests (want your review): it will give a mechanism for slicing test suites in ways that allow them to be run differently depending on the run context.
Skipping tests: the idea is that the user can avoid a test being run (perhaps because they haven't written it yet) by renaming t to t.skip. The same can be done for test, which becomes test.skip.
Refactoring test assertion methods and optimizing them (will be searching for new optimal assertions until then).
@kgryte Any suggestions or corrections?
- t.skip: We don't use this pattern, preferring instead to provide an options argument {'skip': true}. See our tests for native add-ons.
- Not sure that the assertions will require much optimizing. === is already optimized.
- Probably not needed.
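For illustration, the options-argument skip pattern mentioned above could look like the following sketch (a hypothetical `test` shim, not the real harness):

```javascript
// Hypothetical test registration supporting an optional options argument;
// { 'skip': true } marks the test as skipped without renaming anything.
function test( name, opts, fn ) {
    if ( typeof opts === 'function' ) {
        // Options argument was omitted:
        fn = opts;
        opts = {};
    }
    if ( opts.skip ) {
        return { 'name': name, 'status': 'skipped' };
    }
    fn();
    return { 'name': name, 'status': 'ran' };
}

console.log( test( 'native add-on test', { 'skip': true }, function t() {} ) );
// { name: 'native add-on test', status: 'skipped' }
console.log( test( 'regular test', function t() {} ) );
// { name: 'regular test', status: 'ran' }
```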
Adding {skip: true} would be part of tagging tests.
Apart from all the above, the things that we could do are
Beyond the scope for this project.
@kgryte I have made the changes in the proposal and am about to submit it. Thanks for your guidance and patience. Do you have any suggestions or anything to add?
No. You should be good to submit. Good luck!
@kgryte @steff456 @Planeshifter @Pranavchiku I've completed my final proposal, incorporating additional sections I felt were necessary, such as Deliverables and Implementation Plan, Related Pre-Proposal Work, and Post GSoC Plans. The rest of the content adheres to the format provided in the issue section. I've ensured that all aspects are well-explained and structured, aiming for clarity and coherence throughout the proposal. Please review it and let me know if any further adjustments are needed. Thank you, everyone.
Full name
Varad Gupta
University status
Yes
University name
Indian Institute of Information Technology, Ranchi
University program
BTech in Computer Science and Technology with specialization in AI and DS
Expected graduation
2026
Short biography
🚀 Hi, I'm Varad, a Full Stack Developer specializing in backend development, currently studying computer science at IIIT Ranchi with a focus on AI and DS. My expertise lies in crafting robust solutions using the MERN stack—MongoDB, Express.js, React, and Node.js. Additionally, I'm able to work proficiently in Python and passionate about integrating AI into projects.
⚙️ I'm well-versed in backend technologies like Docker and Kubernetes for scalability, and I have experience working with GraphQL for efficient data management.
💡 I'm excited about the opportunity to collaborate and create innovative solutions that push the boundaries of technology.
Timezone
India Standard Time, Time zone in India (GMT+5:30)
Contact details
varadgupta21@gmail.com
Platform
Linux
Editor
🚀 Visual Studio Code (VS Code) stands out as my editor of choice for its seamless blend of functionality and efficiency. With its sleek interface and robust features, it enhances my workflow as a Full Stack Developer specializing in backend development. From its support for the MERN stack to seamless integration with Git, VS Code streamlines coding and collaboration. Its extensive debugging capabilities ensure swift issue resolution, while its customizable nature allows for tailored development environments. In essence, VS Code's versatility and performance make it an invaluable tool for navigating the complexities of modern software development with precision and ease.
Programming experience
🚀 My coding journey began with HTML and Python, evolving through C and C++, until I immersed myself in the dynamic world of the MERN stack—MongoDB, Express.js, React, and Node.js. It was within backend development that I found my true passion, sculpting scalable solutions and dynamic APIs with Node.js as my cornerstone. Through challenges and triumphs, my commitment to backend craftsmanship only deepened. Now armed with a wealth of experience and an unwavering love for backend intricacies, I'm poised to navigate the ever-evolving tech landscape with confidence and innovation, driven by a relentless pursuit of excellence.
Project Highlights:
Twitter Backend System: A robust architecture supporting tweet posting, image uploads, likes, comments, and hashtags. Efficiently managing user profiles, authentication, and engagement features, it ensures a seamless and secure experience for a dynamic social media platform.
Airplane Booking System: Leveraging MongoDB, Express, and Node.js for data management. Robust authentication secures flight and passenger data, ensuring optimal performance and scalability.
JavaScript experience
🚀 My journey with JavaScript encompasses mastering foundational concepts like arrays and functions, advancing to topics such as coercion, OOP, and async programming. From creating simple games like chess to building backend applications, JavaScript's versatility has been my canvas. Crafting games has honed my problem-solving skills, while backend development with Node.js and Express.js has enabled me to build RESTful APIs and real-time applications seamlessly. Transitioning between frontend and backend, I've relished the challenge of architecting scalable solutions. JavaScript's power in both realms continues to inspire me, pushing the boundaries of what's achievable in programming.
Node.js experience
🚀 Embarking on my Node.js journey, I've witnessed its evolution from a server-side runtime to a cornerstone of modern backend development. With Node.js, I've mastered the art of crafting robust, scalable applications, seamlessly integrating with frontend technologies to deliver captivating user experiences. Asynchronous programming challenges have become exhilarating opportunities for optimization, while building RESTful APIs and real-time applications has honed my skills in architecting elegant solutions with frameworks like Express.js. From small-scale projects to enterprise-level systems, Node.js has consistently fueled my passion for pushing the boundaries of backend development. Its versatility, performance, and reliability inspire me as I navigate the dynamic landscape of technology.
C/Fortran experience
🚀 Starting with limited experience in C, I delved into basic data structures like arrays and strings, gradually mastering intricate ones such as linked lists, trees, and graphs. Proficient in dynamic memory allocation, I've engineered efficient solutions, navigating data manipulation and optimization challenges with precision.
Interest in stdlib
I'm intrigued by stdlib's vision to transform numerical computation online. Its unique combination of JavaScript and C, along with a modular structure, resonates well with my expertise. I'm impressed by its dedication to quality, reflected in meticulous testing and detailed documentation. With stdlib, I envision a future where complex computations are simple and accessible to everyone. Joining this community means shaping that future together.
Version control
Yes
Contributions to stdlib
PR Merged: https://github.com/stdlib-js/stdlib/pulls?q=is%3Apr+vr-varad+is%3Amerged PR Open: https://github.com/stdlib-js/stdlib/pulls?q=is%3Apr+vr-varad+is%3Aopen Issue currently working on: https://github.com/stdlib-js/stdlib/issues/1517
Goals
Develop an in-house test runner for stdlib, modeled on stdlib/bench/harness, to optimize testing efficiency. This migration from tape streamlines testing, ensuring uniformity across unit tests and enhancing overall integrity.
Testing Approach:
Unit testing isolates code units for rigorous examination. A robust test runner, akin to stdlib/bench/harness, manages test suite loading, unit execution, result recording, and report generation, ensuring comprehensive coverage.
Advantages:
Unit testing accelerates test runs, bolsters test independence, and elevates consistency in outcomes. By targeting specific code units, developers fortify code reliability and maintainability.
Implementation:
The in-house test runner handles test suite and unit loading meticulously, aligning closely with stdlib/bench/harness standards. This approach optimizes testing environments, seamlessly integrating with stdlib's practices, and enhancing the testing landscape for future development.
Why this project?
Expertise Alignment: This project resonates with my proficiency in JavaScript, including Node.js, and C, providing an avenue to apply my skills effectively.
Challenge and Innovation: Migrating from tape to a custom test runner, particularly in a Node.js environment, presents a stimulating technical challenge, driving innovation in testing frameworks.
Impactful Contribution: Active involvement in this project allows me to significantly enhance stdlib's testing processes, promoting standardized practices within the developer community, especially in Node.js development circles.
Professional Development: Engaging in this project facilitates my growth as a software engineer, offering valuable experience in project management, collaboration, and problem-solving, particularly within the Node.js ecosystem.
In summary, selecting this project aligns with my expertise and aspirations, offering a compelling opportunity for impactful contributions and professional advancement in the realm of Node.js development and beyond.
Qualifications
In executing this proposal, I possess the technical acumen requisite for developing an in-house test runner for stdlib. Proficient in JavaScript, encompassing Node.js, and C, I navigate testing frameworks with fluency. My adept project management skills ensure agile delivery, while my astute problem-solving aptitude enables effective resolution of technical intricacies. Despite lacking formal qualifications in testing methodologies or statistics, my dedication to continual learning and adaptability equip me to contribute meaningfully to the project's success. In summary, my technical prowess, project management acumen, and problem-solving proficiency render me well-suited to propel the project forward and achieve exemplary outcomes.
Prior art
Before initiating the development of a custom test runner for stdlib, it is prudent to explore prior endeavors in the software development domain. Various projects have already pursued similar objectives, either by creating their own test runners or adopting alternative testing frameworks. For instance, notable examples such as Jest in the JavaScript ecosystem exemplify how custom test runners can streamline testing processes. Moreover, scholarly articles, blog posts, and community forums serve as repositories of valuable insights and best practices. By studying these resources, we can glean pertinent information to inform the design and implementation of the stdlib test runner, ensuring its efficacy and alignment with established industry standards.
Commitment
I am fully committed to dedicating 45-50 hours per week to the project before, during, and after the Google Summer of Code program. With unwavering dedication and focus, I will prioritize project milestones and deliverables to ensure its successful completion. I do not have any conflicting commitments such as vacations, other jobs, or exams that would impede my ability to fully devote myself to the project during the program period. This commitment extends beyond the program duration, as I am eager to continue contributing to the project's success in the long term.
Schedule
Assuming a 12 week schedule,
Community Bonding Period: Set up local development environment with required tools such as Node.js and Git. Study existing folder structure and codebase to understand the project architecture. Discuss and refine project goals and requirements with mentors. Research and plan folder structure improvements for better organization.
Week 1: Initialize NPM project and install dependencies. Configure project settings in package.json. Develop core test runner entrypoint for test execution and reporting.
Week 2: Ensure package availability and local linking. Integrate test runner with a sample application. Implement error handling mechanisms for accurate test reporting.
Week 3: Enhance test runner for parallel test execution. Validate functionality with complex test scenarios.
Week 4: Implement 't' function for defining test cases. Define standard test assertions for descriptive test names. (equal, deep-equal, end, pass, skip, ok, set-timeout, clear-timeout, exit, run. not-equal, not-deep-equal, not-ok, etc)
Week 5: Develop mechanisms for managing test contexts. Organize and describe test cases using 't' and nested 'test' blocks. Implement error handling for exceptions.
Week 6: (midterm) Integrate CI/CD pipelines for automated testing. Enhance test output formatting and error reporting. Support nested 'test' blocks for modular organization.
Week 7: Implement hooks for setup and teardown. Update sample application for framework enhancements. Discovering testing files.
Week 8: Supporting Asynchronous Tests
Week 9: Handle multiple failures per test. Test the solution for clarity and correctness.
Week 10: Test Tagging. Skipping Tests.
Week 11: Refactoring test assertion methods and optimizing them.
Week 12: Integrate formatter with the Runner.
Final Week: Explore development workflows and test runners.
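As a concrete illustration of the week 9 item above ("handle multiple failures per test"), assertions could record failures rather than aborting, so a single test reports every failed assertion at once. The API below is a hypothetical sketch, not the real runner:

```javascript
// Hypothetical test context that accumulates failures instead of throwing
// on the first failed assertion.
function createTest( name ) {
    var failures = [];
    return {
        'equal': function equal( actual, expected, msg ) {
            if ( actual !== expected ) {
                failures.push( msg || ( 'expected ' + expected + ', got ' + actual ) );
            }
        },
        'report': function report() {
            return {
                'name': name,
                'pass': ( failures.length === 0 ),
                'failures': failures
            };
        }
    };
}

var t = createTest( 'multiple failures demo' );
t.equal( 1, 1, 'first' );
t.equal( 2, 3, 'second' );
t.equal( 4, 5, 'third' );
console.log( t.report() );
// { name: 'multiple failures demo', pass: false, failures: [ 'second', 'third' ] }
```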
Notes:
A sample test runner would look like:
Related issues
No response
Checklist
The issue name begins with [RFC]: and succinctly describes your proposal.