PortfolioAPI is a RESTful service that integrates with MongoDB, providing efficient CRUD operations across its collections. It serves as the backbone for user interfaces that display portfolio information, including work experience and projects, for individuals showcasing their professional journey. The API also integrates a chatbot, powered by OpenAI's GPT-4o model, which acts as a personal assistant and responds to inquiries with instant, intelligent answers. Built with Django, pymongo, Django Channels, djangorestframework, pydantic, and langchain, this API offers a strong blend of functionality and user engagement.
Run the following commands to install the appropriate packages:
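Based on the stack listed above, a minimal install might look like the following; the PyPI names are assumptions (Django Channels is published as `channels`, `langchain-openai` provides the ChatOpenAI integration), and `pinecone-client` is included for the Pinecone vector database mentioned below. Pin versions as appropriate for your deployment.

```shell
pip install django djangorestframework channels pymongo pydantic langchain langchain-openai pinecone-client
```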
This section provides a detailed overview of the data design, including the database schema, entity relationships, and security measures implemented to protect the data.
The Portfolio API utilizes two main types of databases to store and manage data efficiently: MongoDB Atlas, a NoSQL database for storing portfolio information, and Pinecone Vector Database, a specialized database for managing vector data to enhance chatbot functionalities.
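To make the schema concrete, here is a sketch of how a user document might look in MongoDB Atlas. The field names follow the schema below; the collection name, sample values, and required-field check are all illustrative assumptions.

```python
# Illustrative user document as it might be stored in the MongoDB Atlas
# "users" collection (collection name is an assumption).
user_doc = {
    "id": "u-001",
    "first_Name": "Ada",
    "last_Name": "Lovelace",
    "date_of_birth": "1815-12-10",
    "hobbies": ["mathematics", "music"],
    "short_bio": "First computer programmer.",
    "bio": "A longer biography would go here.",
    "country_of_birth": "United Kingdom",
    "country_of_residence": "United Kingdom",
    "email_address": "ada@example.com",
    "linkedIn_url": "https://www.linkedin.com/in/example",
    "gitHub_url": "https://github.com/example",
}

# Minimal required fields are an assumed constraint, not part of the source.
REQUIRED_FIELDS = {"id", "first_Name", "last_Name", "email_address"}

def is_valid_user(doc: dict) -> bool:
    """Check that the assumed minimal required fields are present."""
    return REQUIRED_FIELDS.issubset(doc)

print(is_valid_user(user_doc))  # True
```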
id
: Unique identifier for the user.

first_Name
: The person's first name.

last_Name
: The person's last name.

date_of_birth
: The date the person was born.

hobbies
: A list of hobbies the person engages in.

short_bio
: A short biography of the person.

bio
: A full biography of the person.

country_of_birth
: The country where the person was born.

country_of_residence
: The country where the person resides.

email_address
: The person's email address.

linkedIn_url
: The URL to the person's LinkedIn profile.

gitHub_url
: The URL to the person's GitHub profile.

id
: Unique identifier for the project.

user_id
: Unique identifier for the user the project belongs to.

description
: Detailed description of the project.

skills
: Technologies used in the project.

images
: URLs to images related to the project.

url
: Link to the project, if available.

id
: Unique identifier for the job experience.

user_id
: Unique identifier for the user the experience belongs to.

project_ids
: A list of project ids.

company
: The company's name.

job_title
: The title of the job.

job_description
: Description of the job role.

start_date
: The start date of the position.

end_date
: The end date of the position, if applicable.

id
: Unique identifier for the achievement.

user_id
: Unique identifier for the user the achievement belongs to.

certificates
: Certificates earned by the individual.

degrees
: Academic degrees earned by the individual.

text
: str. The text of the message.
text
: Inherited from the Message class. This field holds the user's question.
text
: Inherited from the Message class. This field holds the chatbot's answer to the user's question.

question
: A Question object representing the user's question.

chat_history
: A Langchain ChatMessageHistory object holding the history of the conversation.

completed
: A bool indicating whether the answer is complete.

from_chunks(cls, chunks: List[str], question: Question, chat_history: ChatMessageHistory) -> Answer
: Creates an Answer object from chunks of text.

serialize(self) -> Dict[str, Any]
: Converts the Answer object to a dictionary.
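The message classes above might be sketched as follows. This is a simplified illustration: a plain list stands in for Langchain's ChatMessageHistory, and the chunk-joining rule in `from_chunks` is an assumption.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Message:
    text: str

@dataclass
class Question(Message):
    pass

@dataclass
class Answer(Message):
    question: Question = None
    chat_history: List[str] = field(default_factory=list)  # stands in for ChatMessageHistory
    completed: bool = False

    @classmethod
    def from_chunks(cls, chunks: List[str], question: Question,
                    chat_history: List[str]) -> "Answer":
        # Join streamed chunks into the final answer text (joining rule assumed).
        return cls(text="".join(chunks), question=question,
                   chat_history=chat_history, completed=True)

    def serialize(self) -> Dict[str, Any]:
        # Flatten the object into a plain dictionary for transport.
        return {
            "text": self.text,
            "question": self.question.text if self.question else None,
            "completed": self.completed,
        }

answer = Answer.from_chunks(["Hello", ", world"], Question(text="Hi?"), [])
print(answer.serialize())  # {'text': 'Hello, world', 'question': 'Hi?', 'completed': True}
```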
serialize(self) -> Dict[str, Any]
: Converts the RelevantDoc object to a dictionary.
vector_store
: A Langchain MongoDBAtlasVectorSearch object, used to extract the relevant documents from Atlas.

k
: int. Controls the number of relevant documents extracted from Atlas.

call(self, question: Question) -> List[RelevantDoc]
: Calls the service, which extracts the relevant documents from Atlas.
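The retrieval service might look like the sketch below. The class name and the stub vector store are assumptions; in the real service, `vector_store` would be a Langchain MongoDBAtlasVectorSearch instance whose `similarity_search` returns documents from Atlas.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Question:
    text: str

@dataclass
class RelevantDoc:
    content: str

class RelevantDocService:  # class name is an assumption
    def __init__(self, vector_store, k: int = 4):
        self.vector_store = vector_store  # e.g. a MongoDBAtlasVectorSearch instance
        self.k = k

    def call(self, question: Question) -> List[RelevantDoc]:
        # similarity_search(query, k=...) is the Langchain vector-store interface.
        docs = self.vector_store.similarity_search(question.text, k=self.k)
        return [RelevantDoc(content=d.page_content) for d in docs]

# Stub standing in for MongoDBAtlasVectorSearch, for illustration only.
class FakeDoc:
    def __init__(self, page_content):
        self.page_content = page_content

class FakeVectorStore:
    def similarity_search(self, query, k=4):
        return [FakeDoc(f"doc about {query}")][:k]

service = RelevantDocService(FakeVectorStore(), k=2)
print([d.content for d in service.call(Question(text="projects"))])
```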
llm
: A Langchain ChatOpenAI object, used to call the appropriate OpenAI LLM.

call(self, question: Question, chat_history_summary: str) -> Question
: Calls the service, which reconstructs the user's question taking the conversation history into account.
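How the reconstruction service assembles its LLM prompt is not shown in the source; a plausible sketch, with the prompt wording entirely assumed, might be:

```python
def build_reconstruction_prompt(question: str, chat_history_summary: str) -> str:
    """Assemble the prompt sent to the LLM (wording is an assumption)."""
    return (
        "Given the conversation summary below, rewrite the user's question "
        "so it can be understood on its own.\n\n"
        f"Summary: {chat_history_summary}\n"
        f"Question: {question}\n"
        "Standalone question:"
    )

prompt = build_reconstruction_prompt(
    "What about his projects?",
    "The user asked about the portfolio owner's work experience.",
)
```

The real service would pass this prompt to its ChatOpenAI `llm` and wrap the response in a new Question object.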
llm
: A Langchain ChatOpenAI object, used to call the appropriate OpenAI LLM.

call(self, chat_history: ChatMessageHistory) -> str
: Calls the service, which summarizes the conversation.
llm
: A Langchain ChatOpenAI object, used to call the appropriate OpenAI LLM.

call(self, question: Question, chat_history_summary: str, relevant_docs: List[RelevantDoc]) -> str
: Calls the service, which streams the chatbot's response.
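The streaming call can be sketched with a generator. In the real service, the chunks would come from ChatOpenAI's streaming interface and be forwarded to the client (e.g. over a Django Channels consumer); the stub list below only imitates that stream.

```python
from typing import Iterator, List

def stream_response(llm_chunks: List[str]) -> Iterator[str]:
    """Yield answer chunks one at a time, as a streaming service would.

    `llm_chunks` stands in for the chunk stream produced by the LLM's
    streaming interface; a real implementation would iterate over it lazily.
    """
    for chunk in llm_chunks:
        yield chunk

# The caller consumes the stream incrementally, then joins the final answer.
collected = list(stream_response(["The ", "answer ", "is..."]))
print("".join(collected))  # The answer is...
```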