eli64s / readme-ai

README file generator, powered by AI.
https://eli64s.github.io/readme-ai/
MIT License
1.58k stars · 167 forks

Unable to generate the readme file #46

Open · mattpsvreis opened this issue 1 year ago

mattpsvreis commented 1 year ago

Whenever I try using this via Docker, it says it works, but the file is never generated. It only creates a folder with a broken name and nothing inside it.

This is the output of the CLI:

INFO     Total files: 14
WARNING  Ignoring file: package-lock.json
WARNING  Ignoring file: .gitignore
WARNING  Ignoring file: package.json
WARNING  Ignoring file: .eslintrc.json
INFO
Processing prompt: src/routes/getAllPrompts.ts
Response: This code defines a route to get all prompts using the Fastify framework. It retrieves a list of prompts from the Prisma ORM and returns it as a response.
INFO
Processing prompt: src/server.ts
Response: This code sets up an HTTP server with the Fastify framework and enables CORS. It also registers multiple routes for handling prompt retrieval, video upload, transcription creation, and AI completion generation. The server listens on port 3333.
INFO
Processing prompt: src/routes/uploadVideo.ts
Response: This code provides an endpoint for uploading MP3 video files. It uses Fastify and @fastify/multipart for file handling, prisma for database operations, and node modules for file manipulation and UUID generation. The code validates the file type and size, saves the file to a temporary directory, and creates a corresponding entry in the database. Maximum file size limit is set to 500MB.
INFO
Processing prompt: src/routes/createTranscription.ts
Response: This code defines a route for creating transcriptions of videos in a Fastify server. It uses Zod for validation, Prisma for database access, and OpenAI's model to generate transcriptions from audio files. Transcriptions are stored in the database and returned as a response. Total characters: 323.
INFO
Processing prompt: routes.http
Response: The code provides four main functionalities: 1. 'get-prompts' retrieves a list of prompts from a local server. 2. 'upload' allows users to upload a video file to the server. 3. 'create-transcription' generates a transcription for a specific video by providing a prompt. 4. 'generate-ai-completion' uses artificial intelligence to generate a concise summary of the video's transcription based on the given prompt.
INFO
Processing prompt: src/lib/openai.ts
Response: This code initializes and sets up the OpenAI client by importing the necessary packages and creating a new instance of the client using an API key provided through the environment variable.
INFO
Processing prompt: src/routes/generateAICompletion.ts
Response: This code defines an API route for generating AI completions. It validates the input, retrieves a video from a database, generates a prompt message, sends it to OpenAI's chat completions API, and streams the response to the client.
INFO
Processing prompt: src/lib/prisma.ts
Response: The code imports and initializes the Prisma client, which communicates with the database. It offers functions to interact with database tables, perform CRUD operations, and manage the data.
INFO
Processing prompt: prisma/seed.ts
Response: This code utilizes the Prisma ORM to create and manage prompts for YouTube videos. It deletes existing prompts and creates new ones with defined templates. Users can generate catchy video titles and concise descriptions with hashtags based on video transcriptions.
INFO
Processing prompt: prisma/schema.prisma
Response: The code sets up a generator client and sets the Prisma client as the provider. It also sets up a SQLite datasource using the specified URL. Two models are defined: Video with various fields and Prompt with title and template fields.
INFO
Processing prompt: 1
Response: Empowering AI with NLW precision!
INFO
Processing prompt: 2
Response: This project provides a server that enables users to upload and transcribe video files using artificial intelligence. It offers four main functionalities: retrieving prompts from a local server, uploading video files to the server, creating transcriptions based on the provided prompt, and generating concise summaries of video transcriptions using AI completion. The project's value proposition lies in automating the process of transcribing videos, saving time and effort for users while also providing comprehensive and accurate transcriptions.
INFO
Processing prompt: 3
Response: | Feature | Description |
| --- | --- |
| **⚙️ Architecture**     | The system follows a server-client architecture, with an HTTP server implemented using the Fastify framework. The modular setup enables easy extensibility and separation of concerns.                                   |
| **📖 Documentation**    | The codebase lacks detailed documentation. Some files have short comments, but overall, there could be a better explanation of function usage, file structure, and external dependencies.                              |
| **🔗 Dependencies**     | The system relies on Fastify, Prisma, Zod, and OpenAI packages. Fastify is used to handle HTTP requests, Prisma provides database access, Zod adds input validation, and OpenAI powers the AI completion feature.           |
| **🧩 Modularity**       | Modularity is achieved by separating different functionalities into separate route files. Each route handles a specific part of the application and is enabled through the main server file.                                |
| **✔️ Testing**          | The codebase does not include any testing strategies or tools. Adding unit tests, integration tests, and end-to-end tests would greatly enhance code reliability and ensure the proper functioning of the system.         |
| **⚡️ Performance**      | The system's performance can be improved by implementing request/response caching, compressing data during transmission, and optimizing database queries. Proper code profiling and benchmarking would provide more insights. |
| **🔐 Security**         | The system needs to enhance security measures. Validating user inputs more rigorously, safeguarding API keys/secrets, implementing secure data storage practices, and using HTTPS would improve the overall security posture.     |
| **🔀 Version Control**  | Git is used as the version control system for this project. The codebase takes advantage of Git's branch management, commit history, and code collaboration features to facilitate development and maintain code quality.        |
| **🔌 Integrations**     | The system integrates with the OpenAI API to power the AI completion feature. The integration is well handled with dedicated code in the `src/lib/openai.ts` file, abstracting API interactions from the rest of the codebase.     |
| **📶 Scalability**      | The system's scalability could be improved by exploring load balancing techniques, utilizing queues to offload tasks, and implementing caching mechanisms. Proper horizontal scaling and resource optimization strategies are crucial.   |
INFO     Successfully cloned https://github.com/mattpsvreis/nlw-ia-server to /tmp/tmp5j0vr2dq/repo.
WARNING  Exception creating repository tree structure: Error: file permissions of cloned repository must be set to 0o700.
INFO     Top language: Typescript (.ts)
INFO     TypeScript setup guide: ['npm install', 'npm run build && node dist/main.js', 'npm test']
INFO     README file generated at: /app/readme-ai.md
INFO     README-AI execution complete.
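The WARNING about 0o700 permissions looks like the real failure point here: the repository tree-structure step bailed out before the README content could be assembled, even though the run reports success afterwards. As a sanity check, the mode the tool expects can be applied to a directory from Python; a minimal sketch, using a temporary directory as a stand-in for the cloned repo (the real path in the log was a tmpdir like /tmp/tmp5j0vr2dq/repo):

```python
import os
import stat
import tempfile

# Stand-in for the cloned repository directory.
repo_dir = tempfile.mkdtemp()

# Restrict the directory to owner read/write/execute only (0o700),
# which is the mode the warning says is required.
os.chmod(repo_dir, 0o700)

# Read back the permission bits to confirm.
mode = stat.S_IMODE(os.stat(repo_dir).st_mode)
print(oct(mode))  # → 0o700
```

If the Docker container clones the repo as a different user than the one readme-ai runs as, the resulting permissions can fail this check, which would explain the empty output.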

This is how it looks in the folders:

[image: screenshot of the generated folder structure]

I don't know what I'm doing wrong.
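One thing worth checking: the log says the README was written to /app inside the container, so if the host directory isn't bind-mounted over /app, the file only ever exists inside the container's filesystem. A rough sketch of an invocation that mounts the current directory; the image name and CLI flags here are assumptions, so verify them against the project's documentation:

```shell
# Hypothetical image name and flags -- check the readme-ai docs before use.
docker run -it \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  -v "$(pwd)":/app \
  zeroxeli/readme-ai:latest \
  readmeai --repository https://github.com/mattpsvreis/nlw-ia-server
```

With the bind mount in place, the generated readme-ai.md should appear in the current working directory on the host.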

eli64s commented 1 year ago

Hi @mattpsvreis! Do you have any more details on this?

Have you tried installing and running readme-ai using pip?
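For reference, the pip route is roughly the following; the package name is taken from PyPI, and the exact flags may differ between versions:

```shell
pip install readmeai
readmeai --repository https://github.com/mattpsvreis/nlw-ia-server
```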

mattpsvreis commented 1 year ago

I'm unable to use pip on my current system.