ashishkarki opened 1 week ago
Implementing the second option is definitely the way to go if you want to give the project a sense of authenticity. Even though you'll use fake PDFs for testing, having the system in place for manual verification gives you the flexibility to transition to real-world use later.
Restricting Uploads to PDFs:
Manual Cross-Check of Profile Data and ID Documents: Since you're aiming for a manual process (and avoiding the complexity of OCR/PDF scanning for now), the notary can manually review the profile details and uploaded ID documents. Here's how it could work:
Notary Workflow:
Profile Verification Status:
You won’t need to implement complex PDF scanning at this stage, so this approach keeps things manual but straightforward. Later, you can add OCR functionality to automate the cross-checking process.
Here’s a breakdown of how to implement this flow:
The notary updates the profile's verification status (e.g., `status: "Verified"`).

User Schema:
- `isVerified`: Boolean flag indicating whether the user has passed the identity verification step.
- `verificationStatus`: Enum (e.g., `PENDING`, `VERIFIED`, `REJECTED`) to track verification progress.
- `rejectionReason`: If a user's profile is rejected, provide a reason.

Document Schema:
- `uploadedBy`: References the user who uploaded the document.
- `status`: Enum (`PENDING`, `APPROVED`, `REJECTED`) to track notarization status.
- `rejectionReason`: Optional reason for rejecting a document.

Frontend:
Backend:
User Registration:
Notary Cross-Checks Profile:
Document Upload and Notarization:
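The user and document schemas described above can be sketched as TypeScript types (a sketch only: the field names mirror the breakdown, while the `id` fields and the example starter value are my own additions, not part of the plan):

```typescript
// Enum values mirror the schema breakdown above.
enum VerificationStatus {
  PENDING = "PENDING",
  VERIFIED = "VERIFIED",
  REJECTED = "REJECTED",
}

enum DocumentStatus {
  PENDING = "PENDING",
  APPROVED = "APPROVED",
  REJECTED = "REJECTED",
}

interface User {
  id: string;
  isVerified: boolean;                    // has the user passed identity verification?
  verificationStatus: VerificationStatus; // tracks verification progress
  rejectionReason?: string;               // set when a profile is rejected
}

interface NotarizedDocument {
  id: string;
  uploadedBy: string;       // references User.id
  status: DocumentStatus;   // tracks notarization status
  rejectionReason?: string; // optional reason for rejecting a document
}

// A newly registered user starts unverified and pending review.
const newUser: User = {
  id: "user-12345", // placeholder ID
  isVerified: false,
  verificationStatus: VerificationStatus.PENDING,
};
```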
This approach keeps the project simple and manageable while introducing real-world-like identity verification. You're not overcomplicating things by adding OCR or automatic document checks for now, but you’re laying a solid foundation for scalability and adding those features later.
We should think more about the storage and API infrastructure/patterns. A few things I was thinking:
- The user's ID documents: where will we store them, S3? If yes, how will we separate one user's documents from another's? Is S3 a good fit for this use case, since I want to showcase a lot of common AWS services (S3, Lambda, SNS, etc.), whichever are possible for this project?
- When and how will we generate unique IDs for our users?
- Should there be email verification by sending an OTP when creating the account, giving the platform more credibility?
- The user name, DOB, and other profile info would be saved in Postgres (Neon), with Prisma perhaps? Or how else?
Let's step back and think about the storage and API infrastructure patterns. I'll address your questions and suggest an approach that incorporates free, budget-friendly alternatives while still mimicking an AWS-like ecosystem.
Since you want to showcase common AWS services like S3, storing documents in S3 (or its alternative) is a good choice. However, since you're not using AWS directly due to budget constraints, here's a solution that aligns with your goals:
MinIO: Since you’ve already set up MinIO, it can serve as a perfect drop-in replacement for AWS S3. MinIO has the same API as S3, so you can interact with it using the same AWS SDK.
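To illustrate, pointing the standard AWS SDK at MinIO is mostly a configuration change. Here's a sketch of the client config you would pass to `new S3Client(...)` from `@aws-sdk/client-s3` (the endpoint and credentials are placeholders; read real credentials from the environment):

```typescript
// Hypothetical configuration for talking to a local MinIO instance with the
// standard AWS SDK: pass this object to `new S3Client(minioS3Config)`.
const minioS3Config = {
  endpoint: "http://localhost:9000", // MinIO's default API port (placeholder)
  region: "us-east-1",               // the SDK requires a region; MinIO ignores it
  forcePathStyle: true,              // MinIO serves buckets as path prefixes, not subdomains
  credentials: {
    accessKeyId: "minioadmin",       // placeholder: load from env vars in real code
    secretAccessKey: "minioadmin",   // placeholder
  },
};
```

Because MinIO speaks the S3 API, the rest of your upload/download code stays identical whether you later switch the endpoint to real AWS S3.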
How to Organize User ID Documents in MinIO (or S3):
Each user can have their own unique directory structure within the bucket. For example:
```
/users/{userID}/id-documents/{documentID}.pdf
/users/{userID}/notarized-documents/{documentID}.pdf
```
UserID will be a unique identifier for the user, and this structure keeps documents well-organized and easy to retrieve.
You can also prefix the files with unique IDs or timestamps to prevent conflicts if multiple users upload similarly named files (e.g., "passport.pdf").
Example:
```
/users/12345/id-documents/passport-2024.pdf
/users/12345/notarized-documents/contract-2024.pdf
```
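A small helper can build these object keys and prepend an upload date to avoid name collisions (the helper name and the `YYYY-MM-DD` prefix format are my own choices, not part of the plan above):

```typescript
// Build an object key like "users/12345/id-documents/2024-06-01-passport.pdf".
// Prefixing with the upload date avoids collisions when several users (or one
// user, twice) upload files with the same name, e.g. "passport.pdf".
function buildObjectKey(
  userId: string,
  folder: "id-documents" | "notarized-documents",
  filename: string,
  uploadedAt: Date = new Date()
): string {
  const datePrefix = uploadedAt.toISOString().slice(0, 10); // "YYYY-MM-DD"
  return `users/${userId}/${folder}/${datePrefix}-${filename}`;
}
```

For stronger uniqueness you could swap the date prefix for a UUID, at the cost of less readable keys.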
**Same Bucket but Different Folder**: keep notarized documents in the same bucket as ID documents, just under a separate `/notarized-documents/` folder. Keeping all user-related data (both ID and notarized documents) together under a user's folder helps with permissions, audit logging, and easy retrieval.

Example of folder structure for user 12345:

```
/users/12345/id-documents/passport-2024.pdf
/users/12345/notarized-documents/legal-contract-2024.pdf
```
For unique user IDs, use UUIDs, generated with the `uuid` package (for Node.js) or PostgreSQL's `uuid` type.

Adding email verification with an OTP (One-Time Password) for credibility is a great idea: it adds a layer of trust and ensures valid email addresses.
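A minimal OTP sketch, assuming a 6-digit code with a 10-minute expiry (both numbers are assumptions, not requirements):

```typescript
import { randomInt } from "node:crypto";

// Generate a 6-digit OTP plus its expiry timestamp.
function generateOtp(ttlMs = 10 * 60 * 1000): { code: string; expiresAt: number } {
  // randomInt is cryptographically secure; padStart keeps leading zeros.
  const code = randomInt(0, 1_000_000).toString().padStart(6, "0");
  return { code, expiresAt: Date.now() + ttlMs };
}

// An OTP is valid only if it matches and has not expired.
function verifyOtp(
  stored: { code: string; expiresAt: number },
  submitted: string,
  now = Date.now()
): boolean {
  return now <= stored.expiresAt && stored.code === submitted;
}
```

In the real flow you would store the code (ideally hashed) alongside the user row and delete it after a successful verification.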
PostgreSQL and Prisma: Using PostgreSQL (such as Neon for serverless Postgres) with Prisma is a perfect choice to store user profiles. Prisma simplifies interactions with the database, and its ORM can map the relationships (e.g., users and their documents).
What to Store in PostgreSQL:
Here’s how you can mimic common AWS services:
IAM (Access Control):
SNS for Notifications:
Lambda (Serverless Functions):
- `POST /api/users/register`: registers a new user (initial status `pending`).
- `POST /api/users/verify-email`: marks the user as `verified` if the OTP is correct.
- `POST /api/documents/upload`: stores the uploaded file under the user's `/users/{userID}/...` path.
- `GET /api/notarizations/:userID`
Start with the User Registration Flow as the foundation; it's perfectly fine to focus on building the backend first. Backend-first development lets you set up the core infrastructure, database, and logic without worrying about the frontend initially. Once the backend is functional, you can easily plug in the frontend, letting Next.js interact with the backend APIs.
That said, here's a proposed approach for starting with the User Registration Flow in the backend; we can layer the frontend functionality on afterward.
Once a user receives their OTP via email, they can submit it to verify their account.
To send the OTP email, you can use SendGrid, which has a free tier.
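With SendGrid's Node library (`@sendgrid/mail`), sending the OTP comes down to building a message object like this (the sender address and wording are placeholders; in real code you would call `sgMail.setApiKey(...)` and pass the result to `sgMail.send`):

```typescript
// Hypothetical OTP email shape for @sendgrid/mail.
function buildOtpEmail(to: string, otp: string) {
  return {
    to,
    from: "no-reply@example.com", // placeholder: must be a verified sender in SendGrid
    subject: "Your verification code",
    text: `Your one-time verification code is ${otp}. It expires in 10 minutes.`,
  };
}
```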
Once the backend is ready and working:
Use `fetch` or `axios` to make API calls to the NestJS backend.
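A sketch of what such a call could look like from the Next.js side, matching the registration endpoint mentioned earlier (the base URL and payload shape are assumptions):

```typescript
// Build the fetch arguments for registering a user; separating this out
// makes the request shape easy to test without a running backend.
function buildRegisterRequest(baseUrl: string, email: string, fullName: string) {
  return {
    url: `${baseUrl}/api/users/register`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ email, fullName }),
    },
  };
}

// Usage in a component or server action:
// const { url, init } = buildRegisterRequest("http://localhost:3001", email, name);
// const res = await fetch(url, init);
```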
Let's look at how to bring real-world trust and compliance into the Digital Notarization (DN) process. Real-world notarization is subjective, based on both the document and the identity verification of the individual. Replicating this digitally can go one of two ways:
Option 1: Simple, Randomized Approval
This approach keeps things streamlined:
This would be easy to implement, and it simplifies the user journey. However, the downside is that it doesn’t add any real-world relevance or credibility to the process. It also raises security and trust concerns, especially if someone could use fake documents without proper identity verification. The lack of oversight might make it hard to market as a true "notarization" platform.
Option 2: Real-World-Like Flow with Identity Verification
This approach mimics a real-world notary experience by adding identity verification and subjective approval:
Account Creation: Users first need to create an account.
Manual Identity Verification:
Document Upload for Notarization:
Document Approval/Rejection:
Platform and Communication:
This approach adds a layer of real-world authenticity, providing a more legally sound and credible digital notarization service.
Which Approach is Better?
If you're aiming to build a real-world service that is recognized and trusted, the second option with identity verification and manual review is far superior.
Additional Considerations
Implementation Steps for Option 2
User Registration and Identity Upload:
Manual Review Interface for Notaries:
Document Notarization Submission:
Communication Layer:
Tracking and History:
This flow should give the platform a more serious and trustworthy feel, aligning it closely with how notarization works in the physical world.