🧟‍♂️ Wake The Dead
Try it now: WakeTheDead.ai
Perplexity AI 🤝 Link Reader
An AI tool that makes reading, watching, and searching easier
Paste any link and:
- ⚡ Get smart summaries from articles and videos. Skim through them in seconds.
- 🔍 Find what you actually need
- 🤝 See what others found useful
- 🌍 Works in your language
🖼️ Preview
Try Wake The Dead Now →
🌟 Features
⚡ Smart Skimming
- AI-powered content summarization for articles, videos, and web pages
- Global caching system for instant access to previously processed content
- Multi-language support with dedicated caching per language and AI model (see the caching sketch below)
- YouTube video summarization with timestamp navigation
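The global, per-language, per-model cache could be keyed along the lines below. This is a minimal sketch rather than the repository's actual code: the buildCacheKey helper and the 30-day TTL are illustrative, and it assumes the first Upstash Redis instance from the Configuration section.
```typescript
import { createHash } from "crypto";
import { Redis } from "@upstash/redis";

// First Upstash Redis instance from .env.local (article caching).
const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL_1!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN_1!,
});

// Hypothetical key scheme: one entry per (URL, language, model) combination,
// so a summary generated once can be served instantly to everyone else.
function buildCacheKey(url: string, language: string, model: string): string {
  const urlHash = createHash("sha256").update(url).digest("hex").slice(0, 16);
  return `summary:${urlHash}:${language}:${model}`;
}

export async function getOrSummarize(
  url: string,
  language: string,
  model: string,
  summarize: (url: string) => Promise<string>
): Promise<string> {
  const key = buildCacheKey(url, language, model);
  const cached = await redis.get<string>(key);
  if (cached) return cached; // instant hit for previously processed content

  const summary = await summarize(url); // otherwise generate with the selected LLM
  await redis.set(key, summary, { ex: 60 * 60 * 24 * 30 }); // illustrative 30-day TTL
  return summary;
}
```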
🔍 Advanced RAG Search Engine
- Real-time web search integration
- Semantic similarity search using vector embeddings (see the sketch below)
- Community knowledge integration
- Auto-generated follow-up questions
- Rate limiting for API stability
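One way the semantic similarity step can be wired, sketched with the OpenAI embeddings API and an Upstash Vector index. The env variable names, embedding model, and metadata shape are assumptions for illustration, not this project's exact configuration.
```typescript
import OpenAI from "openai";
import { Index } from "@upstash/vector";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Assumption: content chunks live in an Upstash Vector index (hypothetical env names).
const index = new Index({
  url: process.env.UPSTASH_VECTOR_REST_URL!,
  token: process.env.UPSTASH_VECTOR_REST_TOKEN!,
});

// Embed the user's query and pull the most similar previously processed chunks.
export async function findRelevantChunks(query: string, topK = 5) {
  const embedding = await openai.embeddings.create({
    model: "text-embedding-3-small", // assumed embedding model
    input: query,
  });

  const matches = await index.query({
    vector: embedding.data[0].embedding,
    topK,
    includeMetadata: true,
  });

  return matches.map((m) => ({
    score: m.score,
    metadata: m.metadata as { url?: string; text?: string } | undefined, // assumed chunk shape
  }));
}
```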
🌐 Community Knowledge Sharing
- Knowledge base building through user interactions
- Semantic caching for shared content (see the sketch below)
- Cross-referencing between related content
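Shared semantic caching could look roughly like the following, using Upstash's semantic-cache package: two users asking semantically similar questions about the same content can hit one cached answer. The proximity threshold and env names are illustrative, not values taken from this repo.
```typescript
import { SemanticCache } from "@upstash/semantic-cache";
import { Index } from "@upstash/vector";

// The vector index needs an embedding model attached (configured on the Upstash side),
// so lookups match by meaning rather than by exact text. Hypothetical env names.
const index = new Index({
  url: process.env.UPSTASH_VECTOR_REST_URL!,
  token: process.env.UPSTASH_VECTOR_REST_TOKEN!,
});

const cache = new SemanticCache({ index, minProximity: 0.9 });

export async function cachedAnswer(question: string, answer: () => Promise<string>) {
  // "What does this article say about RAG?" can match a cached
  // "Summarize the RAG section of this article" if they are close enough in meaning.
  const hit = await cache.get(question);
  if (hit) return hit;

  const fresh = await answer();
  await cache.set(question, fresh);
  return fresh;
}
```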
⚙️ Customization Options
- Multiple AI model support (including Llama, Gemma 2, Mixtral, and Grok-beta; see the sketch below)
- Language selection
- Progressive Web App (PWA) support
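Groq and xAI both expose OpenAI-compatible endpoints, so model selection can reduce to routing the chosen model name to the right base URL and API key. A sketch under that assumption; the model IDs are examples, not the app's exact list.
```typescript
import OpenAI from "openai";

// Illustrative mapping from a user-selected model to its provider endpoint.
const MODEL_PROVIDERS: Record<string, { baseURL: string; apiKey: string }> = {
  "llama-3.1-70b-versatile": { baseURL: "https://api.groq.com/openai/v1", apiKey: process.env.GROQ_API_KEY! },
  "gemma2-9b-it": { baseURL: "https://api.groq.com/openai/v1", apiKey: process.env.GROQ_API_KEY! },
  "mixtral-8x7b-32768": { baseURL: "https://api.groq.com/openai/v1", apiKey: process.env.GROQ_API_KEY! },
  "grok-beta": { baseURL: "https://api.x.ai/v1", apiKey: process.env.XAI_API_KEY! },
};

// The same OpenAI SDK talks to whichever backend serves the selected model.
export function clientFor(model: string): OpenAI {
  const provider = MODEL_PROVIDERS[model];
  if (!provider) throw new Error(`Unknown model: ${model}`);
  return new OpenAI(provider);
}
```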
🚀 Getting Started
Prerequisites
- Node.js (Latest LTS version)
- Next.js 13+
- An Ollama instance running locally (optional, for local embeddings)
- Various API keys (see Configuration section)
Installation
1. Clone the repository:
```bash
git clone https://github.com/datobhj/wakethedead.git
cd wakethedead
```

2. Install dependencies:
```bash
npm install
```

3. Set up environment variables. Create a .env.local file with the following variables:
```bash
# Required API Keys
GROQ_API_KEY=your_groq_api_key # Required: Main LLM API key
OPENAI_API_KEY=your_openai_api_key # Required: For chunk embeddings
UPSTASH_REDIS_REST_URL_1=your_upstash_url # Required: For article caching
UPSTASH_REDIS_REST_TOKEN_1=your_upstash_token # Required: For article caching
UPSTASH_REDIS_REST_URL_2=your_upstash_url_2 # Required: For chunk embeddings
UPSTASH_REDIS_REST_TOKEN_2=your_upstash_token_2 # Required: For chunk embeddings

# Optional API Keys
OLLAMA_BASE_URL=http://localhost:11434/v1 # Optional: For local embeddings
SERPER_API=your_serper_api_key # Optional: Alternative search API
SEARCH_API_KEY=your_search_api_key # Optional: Alternative search API
XAI_API_KEY=your_xai_api_key # Optional: For xAI model access
```
4. Run the development server:
```bash
npm run dev
```
5. Open http://localhost:3000 in your browser.
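If OLLAMA_BASE_URL is set, embeddings can be generated locally instead of through OpenAI. A sketch of that wiring, assuming your Ollama build exposes the OpenAI-compatible /v1/embeddings endpoint and that you have pulled an embedding model such as nomic-embed-text; the model names are examples.
```typescript
import OpenAI from "openai";

// Point the OpenAI SDK at a local Ollama server when OLLAMA_BASE_URL is set.
const embedder = new OpenAI({
  baseURL: process.env.OLLAMA_BASE_URL ?? "https://api.openai.com/v1",
  apiKey: process.env.OLLAMA_BASE_URL ? "ollama" : process.env.OPENAI_API_KEY!, // Ollama ignores the key
});

export async function embed(text: string): Promise<number[]> {
  const model = process.env.OLLAMA_BASE_URL
    ? "nomic-embed-text" // local model: run `ollama pull nomic-embed-text` first
    : "text-embedding-3-small"; // hosted fallback
  const res = await embedder.embeddings.create({ model, input: text });
  return res.data[0].embedding;
}
```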
🛠️ Technology Stack
- Frontend: Next.js, React, Tailwind CSS
- AI/LLM:
  - LLM support: OpenAI, Groq, Grok-beta
  - Embeddings: OpenAI
- Database: Upstash Vector Database
- Caching: Semantic Cache with Upstash
- APIs:
  - Search: Serper API
  - Various AI model APIs
- Development Tools: TypeScript, ESLint
📚 Core Components
Content Summarization (app/summarizeVideo/route.ts)
- Processes URLs and generates AI-powered summaries
- Supports both article and video content
- Implements caching and rate limiting (see the sketch below)
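A condensed sketch of the shape such a route handler might take in the Next.js App Router, assuming Groq as the LLM behind GROQ_API_KEY and Upstash for rate limiting and caching. It is illustrative only, not the actual contents of app/summarizeVideo/route.ts.
```typescript
// Hypothetical, simplified shape of app/summarizeVideo/route.ts
import OpenAI from "openai";
import { Redis } from "@upstash/redis";
import { Ratelimit } from "@upstash/ratelimit";

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL_1!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN_1!,
});
const ratelimit = new Ratelimit({ redis, limiter: Ratelimit.slidingWindow(10, "1 m") });
const groq = new OpenAI({ apiKey: process.env.GROQ_API_KEY, baseURL: "https://api.groq.com/openai/v1" });

export async function POST(req: Request) {
  // Rate limit by caller IP to keep the upstream APIs stable.
  const ip = req.headers.get("x-forwarded-for") ?? "anonymous";
  const { success } = await ratelimit.limit(ip);
  if (!success) return new Response("Too many requests", { status: 429 });

  const { url, language, model } = await req.json();
  const cacheKey = `summary:${url}:${language}:${model}`;

  // Serve a previously generated summary if one exists for this URL/language/model.
  const cached = await redis.get<string>(cacheKey);
  if (cached) return Response.json({ summary: cached, cached: true });

  // Real code would extract the article text or YouTube transcript here.
  const content = await fetch(url).then((r) => r.text());

  const completion = await groq.chat.completions.create({
    model,
    messages: [
      { role: "system", content: `Summarize the following content in ${language}.` },
      { role: "user", content: content.slice(0, 12000) },
    ],
  });

  const summary = completion.choices[0].message.content ?? "";
  await redis.set(cacheKey, summary, { ex: 60 * 60 * 24 * 30 });
  return Response.json({ summary, cached: false });
}
```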
RAG Search Engine (app/action.tsx)
- Combines web search with vector similarity
- Processes and vectorizes content
- Generates relevant follow-up questions (see the sketch below)
- Handles rate limiting and caching
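Follow-up question generation can be a single structured LLM call over the answer that was just produced. A sketch assuming Groq's OpenAI-compatible JSON mode; the prompt, model ID, and output shape are illustrative.
```typescript
import OpenAI from "openai";

const groq = new OpenAI({ apiKey: process.env.GROQ_API_KEY, baseURL: "https://api.groq.com/openai/v1" });

// Ask a fast model for a few follow-up questions the user might click next.
export async function generateFollowUps(question: string, answer: string): Promise<string[]> {
  const completion = await groq.chat.completions.create({
    model: "llama-3.1-8b-instant", // assumed lightweight model for this auxiliary call
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content: 'Return JSON like {"questions": ["...", "...", "..."]} with three short follow-up questions.',
      },
      { role: "user", content: `Question: ${question}\n\nAnswer: ${answer}` },
    ],
  });

  try {
    const parsed = JSON.parse(completion.choices[0].message.content ?? "{}");
    return Array.isArray(parsed.questions) ? parsed.questions : [];
  } catch {
    return [];
  }
}
```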
⭐ Support
If you think this is cool, you can give the repo a star.
📞 Contact
- 🐦 DM me on X: @DatoBHJ
- 📧 Email me: datobhj@gmail.com
Built with 🧠 and ❤️ by someone who drinks too much coffee