Closed DAMSCROW closed 4 weeks ago
In your case, where you have multiple TypeScript (tsconfig.json) files—one in the root folder and one in the common folder—you should structure them properly to avoid conflicts.
Ideal Setup for tsconfig.json:
Root tsconfig.json (in the root of your project):
{
  "compilerOptions": {
    "target": "es5",
    "lib": ["dom", "esnext"],
    "jsx": "react",
    "moduleResolution": "node",
    "allowSyntheticDefaultImports": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "strict": true,
    "noEmit": true,
    "isolatedModules": true
  },
  "include": ["./src", "./common"], // Include the common folder and source files
  "exclude": ["node_modules"]
}
Include Paths: Ensure that both your src folder and common folder are included in the root tsconfig.json file.
This file handles the overall TypeScript configuration for both your app and shared components.
common/tsconfig.json (inside the common folder):
{
  "extends": "../tsconfig.json", // Extends the root config
  "compilerOptions": {
    "composite": true, // Enable project references if needed
    "rootDir": "./src",
    "outDir": "./dist"
  },
  "include": ["./src/**/*"], // Include all files under common's src folder
  "exclude": ["node_modules"]
}
This file extends the root tsconfig.json but can have folder-specific overrides.
Important: Use "extends": "../tsconfig.json" to inherit the settings from the root tsconfig.json while specifying additional settings relevant to the common folder (such as rootDir or outDir).
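As a sketch of an alternative wiring (paths are illustrative): instead of pulling common's sources into the root compilation via "include", the root config can declare a project reference, so that tsc --build compiles common first and the app consumes its declaration output. This relies on "composite": true being set in common/tsconfig.json, as shown above.

```json
{
  "compilerOptions": {
    "noEmit": true
  },
  "references": [
    { "path": "./common" } // requires "composite": true in common/tsconfig.json
  ]
}
```

With this in place, running tsc --build . from the root builds the common project before type-checking the app.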
Why This Setup?
Root tsconfig.json: This controls the overall TypeScript configuration for your entire project.
common/tsconfig.json: This is specific to the common folder, allowing you to have different build or compilation options for shared code, without causing conflicts with the main project.
Debugging the Conflict:
Make sure that both tsconfig.json files do not conflict with each other. For example, avoid having two conflicting rootDir or outDir settings in both files.
Ensure the paths in both include and exclude make sense for your project structure. In your root config, you should include both the app's source files and the common folder.
Once this setup is in place, TypeScript should compile without conflicts across both your root project and shared code in the common folder.
Let me know if this helps or if you encounter any issues!
module.exports = {
  transformer: {
    // Enable source maps for debugging
    enableBabelRCLookup: true,
    enableBabelRuntime: true,
  },
  resolver: {
    sourceExts: ['jsx', 'js', 'ts', 'tsx'], // Make sure Metro knows about .ts/.tsx files
  },
};
const path = require('path');
module.exports = {
  // Metro needs to know where to look for additional packages or common files
  watchFolders: [
    path.resolve(__dirname, 'packages/common'), // Add the common package to the watch folders
  ],
  resolver: {
    // Ensure Metro knows which file extensions to resolve
    sourceExts: ['tsx', 'ts', 'js', 'jsx'],
  },
  transformer: {
    // Transformer settings; can add source maps or Babel options if needed
    babelTransformerPath: require.resolve('metro-react-native-babel-transformer'),
  },
};
"I have made the necessary changes to the files as discussed earlier. Could you please review them again and provide any further updates?"
Here’s a more formal and structured version of your update:
Project Update: Monorepository with Expo Framework (React Native & React Native Web)
Established a monorepository using the Expo framework, incorporating both React Native and React Native Web.
The repository includes two distinct applications:
Mobile Application (React Native)
Web Application (React Native Web)
A common shared project has been successfully integrated, enabling code sharing between both the mobile and web applications.
The web application has been successfully set up and is running.
However, there are some "module not found" errors that are currently being investigated and resolved to ensure smooth functionality.
A dedicated React Native project has been created within the Expo framework to support unit testing across the shared project.
Conducted an exploration of Bootstrap UI button features within the React Native environment.
Developed a foundational understanding of how to integrate Bootstrap UI components into the React Native framework.
Let me know if you need further adjustments!
If you're ready to limit the metro.config.js to the root folder only, here's a basic setup to avoid the "Unexpected token" error and ensure that .tsx, .ts, .js, and .jsx files are processed correctly:
Root-level metro.config.js
const path = require('path');
module.exports = {
  // Metro needs to know where to look for additional packages or common files
  watchFolders: [
    path.resolve(__dirname, 'packages/common'), // Add the common package to the watch folders
  ],
  resolver: {
    // Ensure Metro knows which file extensions to resolve
    sourceExts: ['tsx', 'ts', 'js', 'jsx'],
  },
  transformer: {
    // Transformer settings; can add source maps or Babel options if needed
    babelTransformerPath: require.resolve('metro-react-native-babel-transformer'),
  },
};
Key Elements in this Configuration:
watchFolders: Ensures that Metro is aware of files in your packages/common directory, so it can correctly resolve them during bundling.
resolver.sourceExts: Ensures that Metro knows to process .tsx, .ts, .js, and .jsx files. This prevents the "Unexpected token" error that occurs when Metro doesn't understand how to process TypeScript files.
babelTransformerPath: This points to the default React Native Babel transformer, ensuring that your code is transformed properly, including JSX and TypeScript.
Additional Step: Clear Metro Cache
After updating metro.config.js, clear the Metro cache to ensure it's using the updated configuration:
npx react-native start --reset-cache
Optional: TypeScript Configuration
Make sure your TypeScript is properly configured as well. Your tsconfig.json should have something like this:
{
  "compilerOptions": {
    "target": "esnext",
    "module": "commonjs",
    "jsx": "react-native", // If working with React Native
    "allowSyntheticDefaultImports": true,
    "esModuleInterop": true,
    "moduleResolution": "node",
    "skipLibCheck": true,
    "strict": true,
    "baseUrl": "./",
    "paths": {
      "@common/*": ["packages/common/src/*"] // Optional, if you use path aliases
    }
  },
  "include": ["apps/**/*", "packages/common/src/**/*"]
}
Steps to Follow:
Replace: Remove the metro.config.js from packages/common.
Update: Use the above content for your root-level metro.config.js.
Cache Clear: Run npx react-native start --reset-cache to ensure the new configuration is used.
Build & Test: Try building your app again and check if the error persists.
This setup should resolve the loader issue and ensure Metro handles your TypeScript and JSX files correctly. Let me know if the error still persists after this!
It sounds like the issue is persisting, and the "Unexpected token" along with the message about "additional loaders" points to the fact that Metro is not fully handling the TypeScript or .tsx files in your project.
Let's break down what the error means:
Loaders: In bundlers like Webpack, loaders are responsible for transforming files of different types (e.g., .ts, .tsx, .js, etc.) into something the bundler can understand (typically plain JavaScript). In Metro, loaders are referred to as transformers.
Metro uses transformers to process files. The most common transformer is the Babel transformer, which processes JavaScript and JSX code. Metro also needs to be able to handle TypeScript (.ts and .tsx) files, and it does this via the Babel transformer in combination with TypeScript settings.
What is causing the error:
The loader (transformer) is trying to process your .tsx file but encounters something that it doesn’t know how to handle (an "unexpected token"). This likely indicates that your configuration is not properly set up to handle TypeScript.
Metro's Babel transformer is capable of handling most React Native files but might not be correctly set up to process .tsx files, which is where TypeScript-specific tooling comes in.
Solution: Setup Metro's Babel Transformer to handle TypeScript
Make sure your project is using the right Babel setup to process TypeScript (.ts and .tsx), in addition to JSX. Here's how you can configure it:
First, ensure that the necessary Babel and TypeScript dependencies are installed:
npm install --save-dev @babel/preset-typescript @babel/core babel-jest metro-react-native-babel-preset
You need to configure Babel to process TypeScript along with JSX. Create or update a babel.config.js file in your root directory:
module.exports = {
  presets: [
    'module:metro-react-native-babel-preset',
    '@babel/preset-typescript', // Add this to handle TypeScript files
  ],
};
This tells Babel (via Metro) to use:
metro-react-native-babel-preset for React Native-specific transformations.
@babel/preset-typescript for handling TypeScript files.
Make sure your tsconfig.json is properly configured to handle .tsx files:
{
  "compilerOptions": {
    "target": "esnext",
    "module": "commonjs",
    "jsx": "react", // Use "react-native" if working with React Native
    "allowSyntheticDefaultImports": true,
    "esModuleInterop": true,
    "moduleResolution": "node",
    "strict": true,
    "skipLibCheck": true
  },
  "include": ["apps/**/*", "packages/common/**/*"]
}
Since you have a monorepo structure, ensure that the metro.config.js file includes packages/common in its watchFolders as discussed earlier:
const path = require('path');
module.exports = {
  watchFolders: [
    path.resolve(__dirname, 'packages/common'),
  ],
  resolver: {
    sourceExts: ['ts', 'tsx', 'js', 'jsx'],
  },
  transformer: {
    babelTransformerPath: require.resolve('metro-react-native-babel-transformer'),
  },
};
After making these changes, you should reset the Metro cache:
npx react-native start --reset-cache
Summary of What You Need:
Install @babel/preset-typescript.
Configure babel.config.js to use @babel/preset-typescript and metro-react-native-babel-preset.
Ensure that metro.config.js is set up to handle .tsx files with correct sourceExts and that the right directories are in watchFolders.
Clear the cache after making the changes using npx react-native start --reset-cache.
Next Steps:
Try these steps and check if the error is resolved.
Let me know if the issue persists, and we can dive deeper into debugging any remaining configuration issues or file-specific problems.
NeverThrow helps manage errors in a structured way by enforcing clear success and failure handling. It centralizes error management, improves type safety, and simplifies testing. This reduces crashes and maintains consistency across your mobile and web apps. However, it might add complexity for simpler projects or teams not familiar with it.
Since your team has experience with TypeScript, adapting to NeverThrow should be relatively straightforward. The extra setup for integrating it into a React Native project is manageable with your background in TypeScript.
Not necessarily very difficult, but integrating NeverThrow into an application that already uses Hono and React Query might require extra work and some adjustments. Here’s why:
Current Behavior: Hono catches thrown errors and automatically converts them into HTTP responses, like 500 errors.
With NeverThrow: NeverThrow doesn't throw errors but instead returns Result objects (either ok for success or err for failure). Since Hono is designed to catch thrown errors, it won't automatically handle err results from NeverThrow.
What You’d Need to Do: You would need to manually check if the result is ok or err and then handle it. For example, if it's an err, you might need to convert it into a 500 response or pass it to Hono's error-handling system.
const fetchData = async () => {
  const result = await myFunction();
  if (result.isErr()) {
    throw new Error(result.error.message); // Manually throw for Hono to catch
  }
  return result.value;
};
Current Behavior: React Query catches errors that are thrown during data fetching and automatically shows error boundaries (like an error page or fallback UI).
With NeverThrow: Since NeverThrow doesn’t throw errors, React Query’s error boundaries won’t catch err results automatically. This means you’d have to manually handle the errors and notify React Query that an error occurred.
What You’d Need to Do: You could modify your React Query fetch function to return err or ok, and manually check for errors in the onError or onSuccess callbacks:
const fetchData = async () => {
  const result = await someApiCall();
  if (result.isErr()) {
    throw new Error(result.error.message); // Manually throw for React Query to catch
  }
  return result.value;
};

const { data, error } = useQuery('fetchData', fetchData);
if (error) {
  console.log("Error caught by React Query");
}
Challenges:
Manual Handling: You would have to manually throw errors from err results in both Hono and React Query to ensure the existing error-handling mechanisms work as expected. This adds a bit of complexity.
Consistency: If parts of your app rely on throw for error handling (like Hono and React Query do), mixing in NeverThrow could lead to inconsistent patterns where some parts return Result objects and others throw errors. It’s important to decide on a consistent approach for error handling.
Not Impossible, but Requires Thought:
While integrating NeverThrow into an app using Hono and React Query is possible, it’s important to weigh whether the benefits of NeverThrow (clear, type-safe error handling) justify the extra work of manually adapting error handling in existing libraries that rely on throw.
In short:
It’s not "very difficult" but involves extra work.
You would need to ensure compatibility by manually handling errors from NeverThrow in both Hono and React Query.
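The Result pattern and the throw-at-the-boundary bridge described above can be sketched in a few lines. This is a hand-rolled, self-contained sketch, not the real neverthrow library (which exports ok()/err() with a richer API); parsePort is a hypothetical example function.

```javascript
// Minimal stand-ins for neverthrow's ok()/err() Result values.
const ok = (value) => ({ isOk: () => true, isErr: () => false, value });
const err = (error) => ({ isOk: () => false, isErr: () => true, error });

// A function that returns a Result instead of throwing.
function parsePort(input) {
  const n = Number(input);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    return err(new Error(`invalid port: ${input}`));
  }
  return ok(n);
}

// The bridge for frameworks that expect thrown errors (Hono, React Query):
// unwrap the Result at the boundary and re-throw on err.
function unwrapOrThrow(result) {
  if (result.isErr()) throw result.error;
  return result.value;
}

console.log(unwrapOrThrow(parsePort('8080'))); // 8080
try {
  unwrapOrThrow(parsePort('not-a-port'));
} catch (e) {
  console.log(e.message); // invalid port: not-a-port
}
```

Keeping unwrapOrThrow as the single conversion point preserves Result-style handling inside your own code while letting the framework's error handling keep working unchanged.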
Retrospective Improvement Point: We initially struggled with React Native Web, Expo, and Glustack UI, relying on various online resources, which complicated understanding and debugging. However, this sprint offered valuable learning experiences as we explored and compared different frameworks. Additionally, Sundar's implementation of an automatic version update control was an excellent initiative that significantly improved our project’s efficiency. His proactive approach to addressing project challenges is highly commendable. Moving forward, we should build from scratch with a clear understanding of each component to enhance our efficiency and problem-solving capabilities.
I reviewed the document prepared by Eldho, and it includes everything from the checklist I created during my onboarding, except for the Phoenix security document, which I had listed as one of the items. I'd like to confirm with everyone whether this document should be part of the onboarding process or handled separately. Thank you for your attention.
I will be available for the retro and planning meetings this afternoon, but I plan to take my scheduled half-day leave today. I couldn't work for some time this morning, and after the planning meeting, I may not be able to join further. I'm planning to fit both the retro and planning meetings into my available half-day. I won’t be available for anything else until later in the evening.
Initially, we faced challenges with React Native Web and related frameworks like Expo and Glustack UI, as we relied heavily on online resources, which complicated our understanding. When issues arose, it became difficult to debug and pinpoint the root cause. Moving forward, our approach is to build everything from scratch, ensuring a clear understanding of each component we include in the monorepo setup. This way, we'll have better control over the elements we're adding, which will make troubleshooting easier in the future, as opposed to struggling with unfamiliar frameworks and components.
Before getting a new API key added as a secret in AWS Secrets Manager for a separate API (not related to the existing API), here are some clarifications and questions you should consider:
Is the new API key provided and documented?
Ensure you have the exact API key that needs to be stored.
Are there any access scopes or specific permissions tied to this API key?
Clarify the permissions that the API key should grant (e.g., read, write, admin).
What should the secret name be?
Confirm a naming convention for this new secret, especially if it’s for a different API, so that it doesn’t conflict with existing secrets.
Is there a format or prefix to follow for naming?
For example, if the key is for a billing API, it could be named like billing-api-key.
Who or what will have access to this secret?
Specify which Lambda functions or services need access to the new API key. Should all the services that access the existing API key also have access to this one?
Are there any IAM policies that need to be updated?
If new permissions are required for accessing the secret, ensure the necessary IAM policies are in place.
Where will this API key be used (e.g., production, staging)?
Ensure the API key is used in the correct environment. For example, if it’s for a test API, ensure it’s stored under a separate secret for staging.
Will the new key be environment-specific?
If you're working in multiple environments, clarify if the API key will differ across environments (e.g., different keys for development, staging, and production).
Is there an expiration or rotation policy for this API key?
Ensure that the secret is configured to rotate if necessary. Some APIs require keys to be rotated periodically for security purposes.
Do we need any additional encryption requirements?
Check if there are specific encryption needs beyond what Secrets Manager provides by default.
Will the secret be fetched in the same way as existing secrets?
Will the new API key be retrieved using the same method (like getSecrets() in your code), or does it require a different retrieval process?
Will it need to follow the same refresh or caching policies as the existing secrets?
If you have a periodic refresh set up for other secrets, confirm if the new API key should follow the same refresh interval.
Should the new secret be considered critical?
If the API key is essential for the function to run, you should mark it as critical, so that the system doesn’t proceed if the key retrieval fails.
Do we need to update documentation to include this new secret?
Ensure the new secret’s use is documented properly for future reference.
Will there be any specific monitoring or alerting set up for this secret?
Consider setting up monitoring for when the key is accessed, fails to load, or approaches expiration.
Asking these clarifying questions upfront will help ensure smooth integration of the new API key into AWS Secrets Manager and prevent any issues down the line.
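The naming and environment questions above can be enforced with a small helper. This is purely illustrative; the env/service-api-key scheme and the environment names are assumptions to agree on with the team, not an established convention.

```javascript
// Build a consistent, environment-scoped secret name, e.g. "staging/billing-api-key".
function secretName(env, service) {
  const allowed = ['development', 'staging', 'production'];
  if (!allowed.includes(env)) {
    throw new Error(`unknown environment: ${env}`);
  }
  return `${env}/${service}-api-key`;
}

console.log(secretName('staging', 'billing')); // staging/billing-api-key
```

Centralizing the convention in one function keeps environment-specific keys from colliding and makes typos fail loudly instead of silently creating a new secret.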
This new portion of the code introduces handling for secrets in local development environments by importing them from the process environment variables. Here's what the new code adds and what clarifications might be needed:
Code Analysis:
This function iterates over the keys in the SECRETS object, which is a list of expected secret keys.
It checks if each key exists in the process environment (process.env), and if so, it parses the value (assuming it is in JSON format) and logs it.
Example of an environment variable expected:
SECRETS_RANTANDRAVE = '{"uname":"username", "pwd":"password", "rantAndRaveApiKey":"apiKey"}'
Secrets are then accessed from this environment variable during local development.
It is specifically noted that these secrets should not be stored in .env.local or other environment files, ensuring they are never committed to the repository.
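The loading behavior described above can be sketched as follows. The key list and the SECRETS_RANTANDRAVE variable are taken from the example; the function name and error handling here are assumptions, not the project's actual code.

```javascript
// For each expected secret key, read the matching environment variable and
// parse its JSON payload; skip keys that are not set locally.
function loadLocalSecrets(envObj, keys) {
  const secrets = {};
  for (const key of keys) {
    const raw = envObj[key];
    if (raw === undefined) continue; // not set in this environment; skip
    try {
      secrets[key] = JSON.parse(raw);
    } catch {
      throw new Error(`environment variable ${key} is not valid JSON`);
    }
  }
  return secrets;
}

// Demo with a fake environment object (never hard-code real credentials)
const fakeEnv = {
  SECRETS_RANTANDRAVE:
    '{"uname":"username","pwd":"password","rantAndRaveApiKey":"apiKey"}',
};
const loaded = loadLocalSecrets(fakeEnv, ['SECRETS_RANTANDRAVE', 'SECRETS_OTHER']);
console.log(loaded.SECRETS_RANTANDRAVE.uname); // username
```

Failing fast on malformed JSON makes misconfigured local environments obvious at startup rather than at first use.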
New Clarifications You May Need:
Clarify which secrets need to be set up in local environments (e.g., for debugging or testing purposes).
Make sure the naming convention is well-defined (e.g., SECRETS_APIKEY or SECRETS_RANTANDRAVE).
Ensure that developers know how to set up secrets in their local environments without exposing them in code or environment files.
You could ask if there's a secure central location (like 1Password or AWS SSM) from which developers can retrieve the secrets locally, rather than sharing them manually.
If the API key needs to be used in local development environments, clarify how to obtain and manage these keys.
Ensure that local access matches production as closely as possible to avoid discrepancies.
Check if there is a need to mock or stub certain secret values during local development or unit tests to avoid relying on real credentials in a non-secure environment.
Ask how rotation of secrets will be handled in local development. Will developers be informed when a key is rotated, and how will they update their local environments?
Since the environment variables are being used locally, make sure to clarify if any specific IAM policies need to be configured for services running locally (e.g., AWS CLI for local Lambda execution).
Updated Questions:
How will the API key for the new service be added to the local development environment securely, and what naming convention should we use?
Are there any additional permissions or IAM policies that need to be updated for local development environments, in addition to production?
Is there a rotation mechanism in place for local development secrets, and how will developers be informed to update their environment variables?
Can the team use any secret management tools (like AWS SSM or Secrets Manager) for safely managing local development secrets?
These additional questions will ensure that the new API key and existing secrets are handled securely in both local and production environments.
Chris, I know you're busy with all the GPT-3 work, but whenever you have a moment, could you help clarify a couple of things about the secret implementation for the Raindrop API?
First, should the new API key secret be classified as critical? Based on the wiki, if an API key is vital for functionality, we should mark it as critical so the system doesn't proceed if key retrieval fails. Does that apply here as well?
Secondly, is there a naming convention we need to follow for the secret? Is there a required format or prefix for the secret name?
Sure! Here's a reworded version of your message:
Hi Aydemoy,
I’ve made the necessary changes for the IAM policy and secret manager permissions, deployed the code, and merged it with our combined branch. The latest code is now deployed in the dev environment (dev1). Initially, Chaitra had replaced our branch in dev1, but I reverted it back to our code. The infrastructure has also been deployed using my branch, as it includes changes for the secret manager and IAM policy updates.
However, when I tried testing our URL in dev1, I encountered an appconfig error. While debugging, I noticed that our API URL is missing from AWS AppConfig. You may have added it in the CDK code, but it seems like it might not have been deployed. Could you please check and confirm?
Otherwise, everything else seems to be in place.
Let me know if you'd like any further adjustments!
Here’s a more refined and expanded version of your PR description:
PR Description:
This pull request introduces a new secret management system for securely storing the Raindrop API key. A new key has been created in the secret manager, specifically designed to hold the API key required for accessing the Raindrop API. The Lambda function will be updated in a future pull request to retrieve and utilize this API key from the secret manager, aligning with the broader scope of story EMBER-315.
In this current PR, the focus is on setting up a secure storage mechanism for the Raindrop API key, ensuring that sensitive information is safeguarded according to best practices. The necessary code changes have been made to support this secure storage, laying the groundwork for future integration with the Lambda function.
Please note that the Lambda function update will be addressed in a separate PR as part of the ongoing development for EMBER-315.
Let me know if you'd like any further adjustments!
The error you're encountering, "SyntaxError: Cannot use import statement outside a module", typically occurs when Jest is not properly set up to handle ES module imports (import statements) or when certain JavaScript/TypeScript transformations are not applied.
Let's break down the problem and its potential solutions:
By default, Jest does not support ES modules (import/export statements) without additional configuration. Jest expects CommonJS syntax (require, module.exports), so you need to tell Jest how to handle these modern JavaScript features.
You are likely using modern ES modules syntax (import from 'react-native'), which Jest is unable to understand unless it’s properly configured to transpile the code.
It looks like Jest is not transforming your TypeScript or ES module files properly. To resolve this, you can either use ts-jest (to handle TypeScript) or babel-jest (to handle modern JavaScript syntax like ES modules).
Solutions
Option 1: Using babel-jest
Since you're using TypeScript with React Native and ES modules, you'll need to ensure Babel is configured to transpile your test files correctly.
npm install --save-dev babel-jest @babel/preset-env @babel/preset-typescript @babel/preset-react
Create a babel.config.js in your project root:
module.exports = {
  presets: [
    '@babel/preset-env',        // Transpile ES modules and modern JavaScript
    '@babel/preset-typescript', // Handle TypeScript
    '@babel/preset-react',      // Handle React JSX syntax
  ],
};
Then, in your jest.config.js:
module.exports = {
  preset: 'react-native', // For React Native
  transform: {
    '^.+\\.(js|jsx|ts|tsx)$': 'babel-jest', // Use babel-jest to transpile files
  },
  moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx'],
  testEnvironment: 'jsdom',
  setupFilesAfterEnv: [], // add your test setup file(s) here if needed
};
Option 2: Using ts-jest
If you'd rather use ts-jest to handle both TypeScript and JavaScript transformations, follow these steps:
npm install --save-dev ts-jest @types/jest
module.exports = {
  preset: 'ts-jest',
  testEnvironment: 'jsdom',
  transform: {
    '^.+\\.(ts|tsx)$': 'ts-jest', // Use ts-jest to handle TypeScript
  },
  moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx'],
  setupFilesAfterEnv: [], // add your test setup file(s) here if needed
};
Also make sure your tsconfig.json emits CommonJS so Jest can consume the output:
{
  "compilerOptions": {
    "module": "commonjs", // Use commonjs for Jest compatibility
    "esModuleInterop": true,
    "jsx": "react" // Enable JSX for React
  }
}
Option 3: transformIgnorePatterns Configuration
Sometimes, Jest ignores certain folders like node_modules during transformation, but React Native packages often require transformation.
In your jest.config.ts, ensure you're properly transforming React Native and Expo modules:
transformIgnorePatterns: [
  'node_modules/(?!(react-native|react-native-web|@react-native|expo|@expo|@unimodules)/)', // Allow transforming RN and Expo packages
],
After making these changes, you should test your Jest setup:
Run Jest using:
jest --no-cache --verbose
This will give you a more detailed output and help ensure the files are being processed correctly.
Summary:
Ensure Jest is correctly configured to handle ES modules and TypeScript.
Use babel-jest or ts-jest to transpile your code.
If you encounter issues with third-party modules (like React Native), ensure they are properly transformed using the transformIgnorePatterns option.
Let me know how it goes after applying the changes!
module.exports = {
  preset: 'react-native',
  transformIgnorePatterns: [
    'node_modules/(?!(@react-native|react-native|react-native-js-polyfills)/)', // Transform React Native modules
  ],
  transform: {
    '^.+\\.(js|jsx|ts|tsx)$': 'babel-jest', // Use babel-jest to transpile your files
  },
  setupFilesAfterEnv: [], // add your test setup file(s) here if needed
};
The node-polyfill-webpack-plugin you are using is designed specifically to provide Node.js polyfills in Webpack projects targeting web environments. However, in React Native, especially with Expo, Webpack is not used for mobile builds; React Native uses Metro Bundler instead. Using node-polyfill-webpack-plugin there may therefore cause issues, since it assumes a Node.js-style environment that React Native does not provide.
Here’s how you can address the usage of node-polyfill-webpack-plugin in your project, including how to manage it with Babel and Metro.
Key Considerations for React Native:
React Native's runtime environment (JavaScriptCore/Hermes) does not natively support Node.js APIs (like path, fs, etc.).
Metro Bundler (used by React Native) doesn't handle Webpack plugins.
If you’re targeting Expo for Web, Webpack is used for web builds, so node-polyfill-webpack-plugin might still be relevant there, but not for mobile.
Steps to Handle node-polyfill-webpack-plugin in React Native:
In React Native mobile builds, the polyfill is unnecessary because React Native doesn't have access to Node.js modules like fs, path, or Buffer natively. If you don't need Node.js APIs in your mobile build, you should either:
Remove node-polyfill-webpack-plugin for the mobile environment.
Conditionally apply it only when building for web (Expo for Web).
Example: Apply Polyfill Only in Web Environment
You can conditionally apply node-polyfill-webpack-plugin in a web-specific config:
In your webpack.config.js (for web builds only):
const NodePolyfillPlugin = require('node-polyfill-webpack-plugin');
module.exports = {
  // Other webpack configurations...
  plugins: [
    new NodePolyfillPlugin(), // Use only in web build
  ],
};
Ensure the plugin is not applied in the mobile context since React Native doesn't use Webpack.
Since Metro doesn't use Webpack or its plugins, you need to resolve any Node.js-specific modules or polyfills differently.
If your React Native app is accidentally importing Node.js-specific APIs (like path, fs, Buffer), you may need to mock or replace them. To do this, you can adjust your Metro configuration.
Example: Add Node.js Mocks in Metro Bundler
In metro.config.js:
module.exports = {
resolver: {
extraNodeModules: {
// Provide mocks or polyfills for Node.js modules if needed
path: require.resolve('path-browserify'),
fs: require.resolve('memfs'), // Or another mock for fs
buffer: require.resolve('buffer/'),
},
},
};
This configuration ensures that if any Node.js modules are referenced in your project, Metro will replace them with appropriate polyfills or browser-compatible modules.
If you're encountering syntax issues with the polyfill or its dependencies, ensure Babel is configured correctly to transpile the polyfill for React Native.
In your babel.config.js:
module.exports = {
  presets: ['module:metro-react-native-babel-preset'],
  plugins: [
    // Add any necessary Babel plugins to transpile polyfill packages
  ],
};
If you're using a shared codebase for both web and mobile in a monorepo, you may want to add environment-specific Babel configurations:
module.exports = function (api) {
  api.cache(true);
  const isWeb = api.env('web');

  return {
    presets: [
      'module:metro-react-native-babel-preset',
      // Additional presets for web environment if needed
      ...(isWeb ? ['@babel/preset-env'] : []),
    ],
  };
};
When targeting Expo for Web, Webpack is used to bundle the project, and node-polyfill-webpack-plugin will be required to ensure Node.js APIs are available in the browser. In this case, you can:
Ensure Polyfills for Web: Configure Webpack to include node-polyfill-webpack-plugin when building the web version.
Test Webpack Config: Make sure the Webpack configuration applies the node-polyfill-webpack-plugin plugin without impacting mobile.
In your webpack.config.js (specific to web builds):
const NodePolyfillPlugin = require('node-polyfill-webpack-plugin');
module.exports = {
  // Web-specific configurations...
  plugins: [
    new NodePolyfillPlugin(),
  ],
};
If your project includes libraries that rely on Node.js APIs (like path, fs, etc.), you can mock these libraries using tools like Jest for testing or Metro for development.
In your jest.config.js, add the necessary mocks for Node.js-specific modules:
module.exports = {
  setupFiles: [], // add mock setup file(s) for Node.js-specific modules here
};
If your project or third-party libraries rely on Node.js-specific features that aren’t natively available in React Native, you might need to:
Refactor the library to remove or conditionally use those Node.js-specific features.
Replace those dependencies with React Native-compatible alternatives (e.g., react-native-fs for file system operations).
In Summary:
Mobile (React Native): node-polyfill-webpack-plugin is not applicable and should be excluded for React Native mobile builds. Configure Metro to handle any missing modules or replace Node.js-specific APIs with browser-compatible or mock implementations.
Web (Expo for Web): Use node-polyfill-webpack-plugin in Webpack for web builds, but ensure it is not applied in mobile contexts.
Mock or Polyfill Node.js-specific APIs: If certain Node.js APIs are necessary, mock or polyfill them in the web and React Native environments using tools like path-browserify, memfs, and buffer.
This approach will ensure that your project works correctly across mobile and web environments, especially in a monorepo setup.
Hey Frankie, thanks for giving me a heads up! Even though we didn’t work together for long, I genuinely enjoyed the time we spent collaborating. You’re definitely one of the coolest clients I’ve had the pleasure of working with. Your approach to managing tasks and keeping things on track is really something else, and I admire your quick wit. Wishing you all the best in whatever comes next!
{
  "status": 400,
  "error": "Bad Request",
  "message": "Invalid input: 'email' is required and must be a valid email address.",
  "code": "ERR_INVALID_EMAIL"
}
PR Description: Secure Storage Implementation for Raindrop API Key via Secret Manager
This pull request introduces a secure method for storing and accessing the Raindrop API key by utilizing AWS Secret Manager. The primary goal is to enhance security by preventing the exposure of sensitive credentials within the codebase.
Key Changes:
Creation of a New Secret Manager Key:
A new secret has been created in AWS Secret Manager to store the Raindrop API key securely.
This approach ensures that the API key is encrypted at rest and only accessible by authorized services.
Code Updates for API Key Retrieval:
Modifications have been made to enable the application to retrieve the API key from Secret Manager at runtime.
By fetching the key programmatically, we eliminate hard-coded credentials and reduce the risk of unauthorized access.
Note:
The Lambda function that utilizes the Raindrop API will be updated in a separate pull request associated with Story EMBR-315.
This upcoming PR will focus on modifying the Lambda function to use the new secure retrieval mechanism for the API key.
Separating these changes allows for isolated testing and more straightforward code reviews.
Benefits:
Enhanced Security: Storing the API key in Secret Manager protects it from potential leaks and unauthorized usage.
Compliance: Aligns with security best practices and compliance requirements for handling sensitive information.
Scalability: Facilitates easier key rotations and updates without the need to alter the codebase.
Next Steps:
Merge this PR to integrate the secure storage mechanism.
Proceed with updating the Lambda function in the subsequent PR under Story EMBR-315.
Testing:
Ensure that the application can successfully retrieve the API key from Secret Manager in a development environment.
Verify that all existing functionalities dependent on the Raindrop API remain unaffected.
By implementing these changes, we strengthen our application's security posture and lay the groundwork for more secure interactions with external APIs.
Showcase Summary: Raindrop Employee Search API Integration with Secure API Key Management
In this sprint, we initiated our first integration with Raindrop, one of the UK's leading pension finders, to enhance our ability to locate missing pensions. Specifically, we implemented the Raindrop Employee Search API, which allows users to search for employees based on a company or employer name.
Key Implementation Details:
Raindrop API Selection:
We have chosen the Raindrop Employee Search API to be the first point of integration.
This API enables users to input an employer's name and retrieve a list of matching employees in a dropdown, streamlining the search process. For example, entering "Standard" might return results like "Standard Life" and other related employers.
Backend Lambda API:
A new Lambda function has been created to integrate with the Raindrop Employee Search API.
This Lambda serves as the backend, accepting an input string from the frontend (e.g., an employer’s name) and querying the Raindrop API for matching results.
The Lambda function validates the input, ensuring at least three characters are entered before triggering the API call.
Secure Storage of API Key:
Raindrop provided an encrypted API key, which is securely stored in AWS Secret Manager.
This key is retrieved dynamically at runtime, ensuring it is never hard-coded into our system.
To comply with security best practices, the API key and Content-Type header are sent with each API request. These are securely handled, further minimizing risks of exposure.
Validation & Response:
The API validates the input to ensure a minimum of three characters are provided.
Once validated, the Lambda queries the Raindrop API, fetching and returning all matching employee records.
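The validation step can be sketched as a small guard that runs before the Lambda forwards the search to the Raindrop API. The function name and error shape below are illustrative assumptions, not the deployed code:

```javascript
// Hypothetical input guard: require at least three characters before
// the employer search is forwarded to the Raindrop API.
function validateEmployerQuery(query) {
  const trimmed = (query ?? '').trim();
  if (trimmed.length < 3) {
    return {
      ok: false,
      status: 400,
      message: 'Employer name must be at least 3 characters long.',
    };
  }
  return { ok: true, query: trimmed };
}
```

With this guard, an input like "St" is rejected up front, while "Standard" passes through and triggers the upstream search.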
This integration showcases our commitment to security and efficiency, utilizing AWS Secret Manager to safeguard sensitive credentials and ensuring seamless backend integration with Raindrop's Employee Search API. The implementation of secure storage for the Raindrop API key is a foundational step, with future updates planned for other Raindrop API integrations.
This concludes our first API integration with Raindrop, setting the stage for more advanced features in upcoming sprints.
The message in the image suggests moving a health check route to a more secure internal route and adding a Web Application Firewall (WAF) rule to restrict access. Here's an approach you could follow:
Move the current health check from /healthcheck to /internal/healthcheck.
In your route.ts file, add a new route handler in internal.ts to handle requests to /internal/healthcheck.
Example code:
app.get('/internal/healthcheck', (req, res) => {
  // Health check logic
  res.status(200).send('Service is healthy');
});
Define a WAF rule to allow only authorized IPs (e.g., SLAL network or specific IP ranges) to access the /internal/* route.
This can be implemented in your cloud platform (AWS, Azure, GCP, etc.) to ensure that only internal or trusted IPs can reach the /internal/healthcheck endpoint.
Example WAF Rule:
Condition: Allow traffic only from specific IP ranges (e.g., your internal network or SLAL).
Action: Block requests from any other IPs for /internal/* routes.
Once the WAF rule is in place, you can allow external customers to access the /healthcheck endpoint through Ember.
Ensure that no sensitive information is exposed, and that rate-limiting or other security measures (such as authorization) are considered for external health checks if needed.
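If rate limiting is wanted on the external /healthcheck, a minimal fixed-window limiter illustrates the idea. This is only a sketch with made-up names; in production a maintained middleware library would normally be preferable:

```javascript
// Minimal fixed-window rate limiter, keyed by client IP.
function createRateLimiter({ windowMs, max }) {
  const hits = new Map(); // ip -> { count, windowStart }
  return function isAllowed(ip, now = Date.now()) {
    const entry = hits.get(ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(ip, { count: 1, windowStart: now }); // start a fresh window
      return true;
    }
    entry.count += 1;
    return entry.count <= max; // reject once the window's budget is spent
  };
}
```

Wired into Express, `isAllowed(req.ip)` returning false would map to a 429 response before the health check logic runs.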
After implementing the changes, thoroughly test the new /internal/healthcheck route to ensure that:
It functions correctly.
It is accessible only from the authorized IP ranges defined in your WAF rule.
This approach ensures that the /internal/healthcheck is protected while external customers are able to use /healthcheck.
Move the current health check from /healthcheck to /internal/healthcheck. Update the routing logic in your route.ts file and add a new route handler in internal.ts to handle requests for /internal/healthcheck.
app.get('/internal/healthcheck', (req, res) => {
// Health check logic
res.status(200).send('Service is healthy');
});
## Approach to Moving and Securing the Healthcheck Route
### 2. Set Up WAF Rule
To restrict access to the `/internal/*` path, configure a Web Application Firewall (WAF) rule that limits access only to authorized IPs (e.g., SLAL network or specific IP ranges).
- **Condition:** Allow traffic only from authorized IP ranges (internal network or SLAL).
- **Action:** Block any requests to the `/internal/*` routes coming from unauthorized IPs.
This WAF rule can be implemented using your cloud provider (AWS, Azure, GCP, etc.).
### 3. Lift External Restriction for `/healthcheck`
Once the `/internal/healthcheck` is secured via WAF, you can lift the restriction on the original `/healthcheck` route to allow external customers access. However, ensure no sensitive information is exposed on this route, and consider additional security measures such as:
- Rate limiting
- Authorization checks
### 4. Testing
After these changes, test the new `/internal/healthcheck` route to ensure:
- The route works as expected.
- Access is limited to only authorized IP ranges as defined by your WAF rule.
To add a new WAF rule for restricting access to your internal health check route, you can follow the same pattern as in the existing WAF rule creation shown in the screenshots. Here's an example of how to add a new WAF rule for the /internal/healthcheck route:
CDK Code for New WAF Rule (restricting IPs for /internal/healthcheck)
In your waf-rules.ts, you can create a new WAF rule similarly to the existing one:
export function constructInternalHealthCheckWhitelistRule(ipSetArn: string, priority: number): CfnWebACL.RuleProperty {
  return {
    name: 'AllowInternalHealthCheckIPs',
    priority: priority,
    action: { allow: {} },
    visibilityConfig: {
      sampledRequestsEnabled: true,
      cloudWatchMetricsEnabled: true,
      metricName: 'Metric-AllowInternalHealthCheckIPs',
    },
    statement: {
      ipSetReferenceStatement: {
        arn: ipSetArn,
      },
    },
  };
}
In your waf-construct.ts file:
export function createWAFForInternalHealthCheck(stack: BaseStack, wafName: string, ipSet: CfnIPSet): CfnWebACL {
  const allowInternalHealthCheckRule = constructInternalHealthCheckWhitelistRule(ipSet.attrArn, 1); // Set the priority here

  // Create API Gateway WAF for internal health check
  const internalHealthCheckWaf = new CfnWebACL(stack, wafName, {
    scope: 'REGIONAL',
    defaultAction: { block: {} },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: `${stack.getRegionalLevelResourceName(wafName)}-Metric`,
      sampledRequestsEnabled: true,
    },
    rules: [allowInternalHealthCheckRule], // Adding the internal health check rule here
  });

  return internalHealthCheckWaf;
}
Steps:
Define the IPSet: You will need to define an IPSet that contains the internal IP ranges allowed to access the /internal/healthcheck route.
Create the WAF Rule: The new rule constructInternalHealthCheckWhitelistRule restricts access to the IPs allowed in the IPSet.
Add the WAF Rule: The rule is applied in the createWAFForInternalHealthCheck function. It blocks all access except from the IPs in the whitelist.
Make sure to adjust priority values and wafName accordingly when integrating this with your existing code.
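For the first step, the IP set might be defined along these lines. This is a configuration sketch assuming aws-cdk-lib, with `stack` already in scope; the construct id and CIDR range are placeholders to replace with your real internal/SLAL ranges:

```typescript
import { CfnIPSet } from 'aws-cdk-lib/aws-wafv2';

// Sketch only: an IP set holding the internal CIDR ranges allowed to
// reach /internal/healthcheck. The id and addresses are placeholders.
const internalIpSet = new CfnIPSet(stack, 'InternalHealthCheckIPSet', {
  scope: 'REGIONAL',         // matches the REGIONAL web ACL
  ipAddressVersion: 'IPV4',
  addresses: ['10.0.0.0/8'], // replace with your internal ranges
});
```

The resulting `internalIpSet` can then be passed into createWAFForInternalHealthCheck.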
Let me know if you need any further adjustments!
To add a new WAF rule for restricting access to your internal health check route, you can follow the same pattern as in the existing WAF rule creation shown in the screenshots. Here's an example of how to add a new WAF rule for the /internal/healthcheck route:
CDK Code for New WAF Rule (restricting IPs for /internal/healthcheck)
In your waf-rules.ts, you can create a new WAF rule similarly to the existing one:
export function constructInternalHealthCheckWhitelistRule(ipSetArn: string, priority: number): CfnWebACL.RuleProperty {
  return {
    name: 'AllowInternalHealthCheckIPs',
    priority: priority,
    action: { allow: {} },
    visibilityConfig: {
      sampledRequestsEnabled: true,
      cloudWatchMetricsEnabled: true,
      metricName: 'Metric-AllowInternalHealthCheckIPs',
    },
    statement: {
      ipSetReferenceStatement: {
        arn: ipSetArn,
      },
    },
  };
}
In your waf-construct.ts file:
export function createWAFForInternalHealthCheck(stack: BaseStack, wafName: string, ipSet: CfnIPSet): CfnWebACL {
  const allowInternalHealthCheckRule = constructInternalHealthCheckWhitelistRule(ipSet.attrArn, 1); // Set the priority here

  // Create API Gateway WAF for internal health check
  const internalHealthCheckWaf = new CfnWebACL(stack, wafName, {
    scope: 'REGIONAL',
    defaultAction: { block: {} },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: `${stack.getRegionalLevelResourceName(wafName)}-Metric`,
      sampledRequestsEnabled: true,
    },
    rules: [allowInternalHealthCheckRule], // Adding the internal health check rule here
  });

  return internalHealthCheckWaf;
}
Steps:
Define the IPSet: You will need to define an IPSet that contains the internal IP ranges allowed to access the /internal/healthcheck route.
Create the WAF Rule: The new rule constructInternalHealthCheckWhitelistRule restricts access to the IPs allowed in the IPSet.
Add the WAF Rule: The rule is applied in the createWAFForInternalHealthCheck function. It blocks all access except from the IPs in the whitelist.
Make sure to adjust priority values and wafName accordingly when integrating this with your existing code.
Let me know if you need any further adjustments!
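The first step above says to define an IPSet, but its definition isn't shown anywhere in this thread. Here is a minimal CDK sketch of what that could look like, assuming `aws-cdk-lib` v2; the helper name, construct id, IP set name, and CIDR ranges are all placeholders to adapt:

```typescript
import { aws_wafv2 as wafv2 } from 'aws-cdk-lib';
import { Construct } from 'constructs';

// Hypothetical helper: the id, name, and addresses below are placeholders.
export function createInternalIPSet(scope: Construct, id: string): wafv2.CfnIPSet {
  return new wafv2.CfnIPSet(scope, id, {
    name: 'InternalHealthCheckIPs',
    scope: 'REGIONAL',                // Must match the web ACL's scope
    ipAddressVersion: 'IPV4',
    addresses: ['10.0.0.0/8', '172.16.0.0/12'], // Replace with your real internal CIDR ranges
  });
}
```

The returned `CfnIPSet` can then be passed straight into `createWAFForInternalHealthCheck`, which reads its ARN via `ipSet.attrArn`.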
export function constructInternalHealthCheckWhitelistRule(ipSetArn: string, priority: number): CfnWebACL.RuleProperty {
  return {
    name: 'AllowInternalHealthCheckIPs',
    priority: priority,
    action: { allow: {} },
    visibilityConfig: {
      sampledRequestsEnabled: true,
      cloudWatchMetricsEnabled: true,
      metricName: 'Metric-AllowInternalHealthCheckIPs',
    },
    statement: {
      andStatement: {
        statements: [
          { ipSetReferenceStatement: { arn: ipSetArn } },
          {
            byteMatchStatement: {
              fieldToMatch: {
                uriPath: {}, // This tells WAF to check the URI path.
              },
              positionalConstraint: 'STARTS_WITH', // Check if the URI starts with the provided string.
              searchString: '/internal', // The string to match.
              textTransformations: [{ priority: 0, type: 'NONE' }],
            },
          },
        ],
      },
    },
  };
}
export function constructInternalRouteWhitelistRule(ipSetArn: string, priority: number): CfnWebACL.RuleProperty {
  return {
    name: 'AllowInternalIPsForInternalRoutes',
    priority: priority,
    action: {
      allow: {},
    },
    visibilityConfig: {
      sampledRequestsEnabled: true,
      cloudWatchMetricsEnabled: true,
      metricName: 'Metric-AllowInternalIPsForInternalRoutes', // Metric names allow only alphanumerics, '-' and '_'
    },
    // Note: overrideAction removed; a rule may set either action or overrideAction,
    // and overrideAction applies only to rule-group reference statements.
    statement: {
      andStatement: {
        statements: [
          {
            ipSetReferenceStatement: {
              arn: ipSetArn, // Reference to the authorized IP set
            },
          },
          {
            byteMatchStatement: {
              fieldToMatch: {
                uriPath: {}, // Inspect the URI path of the request
              },
              positionalConstraint: 'STARTS_WITH', // Allow any route starting with /internal/
              searchString: '/internal/', // Match the /internal/* routes
              textTransformations: [
                {
                  priority: 0,
                  type: 'NONE',
                },
              ],
            },
          },
        ],
      },
    },
  };
}
Here’s a description you can use for the pull request:
Title: Implement Internal Hono Route and WAF Restriction for Healthcheck
Description: This PR includes the following changes:
Created a new /internal/hono route for internal service access.
Moved the existing health check endpoint to this newly created internal route to ensure it is not exposed externally.
Applied an explicit WAF rule to restrict access to the /internal/hono route, allowing access only from internal network sources (e.g., SLAs).
These changes enhance the security of the health check endpoint by limiting access to authorized internal traffic.
Let me know if you need any adjustments!
export function constructInternalRouteBlockRule(internalIPSetArn: string) {
  return {
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: 'Metric-RestrictInternalRoutes',
    },
    defaultAction: {
      block: {}, // Block by default unless explicitly allowed
    },
    rules: [
      // Rule 1: Allow requests that DO NOT contain /internal/*
      {
        name: 'AllowNonInternalRoutes',
        priority: 1,
        action: {
          allow: {}, // Allow the request
        },
        statement: {
          notStatement: {
            statement: {
              byteMatchStatement: {
                fieldToMatch: {
                  uriPath: {}, // Inspect the URI path of the request
                },
                positionalConstraint: 'STARTS_WITH',
                searchString: '/internal/', // Match requests that start with /internal/
                textTransformations: [
                  {
                    priority: 0,
                    type: 'NONE',
                  },
                ],
              },
            },
          },
        },
        visibilityConfig: {
          cloudWatchMetricsEnabled: true,
          sampledRequestsEnabled: true,
          metricName: 'Metric-AllowNonInternalRoutes', // metricName is required in every visibilityConfig
        },
      },
      // Rule 2: Allow requests to /internal/* only from authorized IPs
      {
        name: 'AllowInternalIPWithInternalRoute',
        priority: 2,
        action: {
          allow: {}, // Allow the request if it matches the IP set
        },
        statement: {
          andStatement: {
            statements: [
              {
                ipSetReferenceStatement: {
                  arn: internalIPSetArn, // Reference to the authorized IP set, now passed in as a parameter
                },
              },
              {
                byteMatchStatement: {
                  fieldToMatch: {
                    uriPath: {}, // Match the URI path of the request
                  },
                  positionalConstraint: 'STARTS_WITH',
                  searchString: '/internal/', // Match /internal/* routes
                  textTransformations: [
                    {
                      priority: 0,
                      type: 'NONE',
                    },
                  ],
                },
              },
            ],
          },
        },
        visibilityConfig: {
          cloudWatchMetricsEnabled: true,
          sampledRequestsEnabled: true,
          metricName: 'Metric-AllowInternalIPWithInternalRoute', // metricName is required in every visibilityConfig
        },
      },
    ],
  };
}
Got it! You want to create two individual methods, each responsible for creating one of the two rules separately. One method will handle the rule that allows non-/internal/ routes, and the other will handle the rule for allowing /internal/ routes from authorized IPs.
Here’s how you can structure it:
Method 1: Allow requests that do not contain /internal/* in the URL path (irrespective of the IP).
Method 2: Allow requests to /internal/* only if the request comes from an IP in the allowed IP set.
Implementation Example:
Method 1: Allow Non-Internal Routes
export function constructAllowNonInternalRoutesRule() {
  return {
    name: 'AllowNonInternalRoutes',
    priority: 1,
    action: {
      allow: {}, // Allow the request
    },
    statement: {
      notStatement: {
        statement: {
          byteMatchStatement: {
            fieldToMatch: {
              uriPath: {}, // Inspect the URI path of the request
            },
            positionalConstraint: 'STARTS_WITH',
            searchString: '/internal/', // Match requests that start with /internal/
            textTransformations: [
              {
                priority: 0,
                type: 'NONE',
              },
            ],
          },
        },
      },
    },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      sampledRequestsEnabled: true,
      metricName: 'Metric-AllowNonInternalRoutes', // metricName is required by visibilityConfig
    },
  };
}
Method 2: Allow Internal Routes for Authorized IPs
export function constructAllowInternalIPWithInternalRouteRule(ipSetArn: string) {
  return {
    name: 'AllowInternalIPWithInternalRoute',
    priority: 2,
    action: {
      allow: {}, // Allow the request if it matches the IP set
    },
    statement: {
      andStatement: {
        statements: [
          {
            ipSetReferenceStatement: {
              arn: ipSetArn, // Pass the IP set ARN as a parameter
            },
          },
          {
            byteMatchStatement: {
              fieldToMatch: {
                uriPath: {}, // Match the URI path of the request
              },
              positionalConstraint: 'STARTS_WITH',
              searchString: '/internal/', // Match /internal/* routes
              textTransformations: [
                {
                  priority: 0,
                  type: 'NONE',
                },
              ],
            },
          },
        ],
      },
    },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      sampledRequestsEnabled: true,
      metricName: 'Metric-AllowInternalIPWithInternalRoute', // metricName is required by visibilityConfig
    },
  };
}
Explanation:
Method 1 (constructAllowNonInternalRoutesRule): Constructs the rule that allows any request not containing /internal/* in the URI path. It uses a notStatement that matches all requests except those starting with /internal/.
Method 2 (constructAllowInternalIPWithInternalRouteRule): Constructs the rule that allows requests to /internal/* only from authorized IP addresses. It accepts the ipSetArn as a parameter, so it dynamically references the IP set that contains the list of authorized IPs.
Usage:
You can now call these methods separately to add them to your WAF configuration:
const allowNonInternalRoutesRule = constructAllowNonInternalRoutesRule();
const allowInternalIPWithInternalRouteRule = constructAllowInternalIPWithInternalRouteRule('arn:aws:wafv2:us-west-2:123456789012:regional/ipset/my-ip-set');

// Add them to the WAF configuration
const wafConfig = {
  visibilityConfig: {
    cloudWatchMetricsEnabled: true,
    metricName: 'MyWAFMetric',
  },
  defaultAction: {
    block: {}, // Block by default
  },
  rules: [
    allowNonInternalRoutesRule, // Allow non-internal routes
    allowInternalIPWithInternalRouteRule, // Allow internal routes for authorized IPs
  ],
};
Benefits:
Separation of Logic: Each rule has its own method, making the code more modular and easier to maintain.
Reusability: You can reuse or modify each method individually based on future requirements.
Dynamic Configuration: You can pass different ipSetArn values to the second method, allowing flexibility in managing IP sets for different environments.
Final Setup:
constructAllowNonInternalRoutesRule will allow any route that does not start with /internal/.
constructAllowInternalIPWithInternalRouteRule will allow routes to /internal/* only for authorized IP addresses.
Let me know if this works for your case!
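One operational detail worth guarding against: WAF requires every rule in a web ACL to have a unique priority, and composing rules from separate builder functions makes accidental collisions easy. A small sanity-check sketch (the minimal `WafRule` shape below is an assumption, not part of the original code):

```typescript
interface WafRule {
  name: string;
  priority: number;
}

// Throws if two rules share a priority, which WAF rejects at deploy time.
export function assertUniquePriorities(rules: WafRule[]): void {
  const seen = new Map<number, string>();
  for (const rule of rules) {
    const existing = seen.get(rule.priority);
    if (existing !== undefined) {
      throw new Error(`Priority ${rule.priority} used by both "${existing}" and "${rule.name}"`);
    }
    seen.set(rule.priority, rule.name);
  }
}
```

Calling this on the assembled `rules` array before constructing the web ACL fails fast at synth time instead of at deployment.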
Here are the JSON representations of the two AWS WAF rules:
Rule 1: Allow Non-Internal Routes
This rule allows requests that do not contain /internal/* in the URL path, regardless of the IP address.
{
  "Name": "AllowNonInternalRoutes",
  "Priority": 1,
  "Action": { "Allow": {} },
  "Statement": {
    "NotStatement": {
      "Statement": {
        "ByteMatchStatement": {
          "FieldToMatch": { "UriPath": {} },
          "PositionalConstraint": "STARTS_WITH",
          "SearchString": "/internal/",
          "TextTransformations": [
            { "Priority": 0, "Type": "NONE" }
          ]
        }
      }
    }
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "AllowNonInternalRoutesMetric"
  }
}
Rule 2: Allow Internal Routes for Authorized IPs
This rule allows requests to /internal/* only if the request comes from an IP in the allowed IP set. This IP set reference should be passed using the IPSetReferenceStatement.
{
  "Name": "AllowInternalIPWithInternalRoute",
  "Priority": 2,
  "Action": { "Allow": {} },
  "Statement": {
    "AndStatement": {
      "Statements": [
        {
          "IPSetReferenceStatement": {
            "ARN": "arn:aws:wafv2:region:account-id:ipset/ipset-name/ipset-id"
          }
        },
        {
          "ByteMatchStatement": {
            "FieldToMatch": { "UriPath": {} },
            "PositionalConstraint": "STARTS_WITH",
            "SearchString": "/internal/",
            "TextTransformations": [
              { "Priority": 0, "Type": "NONE" }
            ]
          }
        }
      ]
    }
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "AllowInternalIPWithInternalRouteMetric"
  }
}
Key Fields:
ARN in Rule 2: Replace "arn:aws:wafv2:region:account-id:ipset/ipset-name/ipset-id" with the actual ARN of your IP set.
Priority: The priority of the rules is set as 1 for the first rule and 2 for the second. Ensure the priority order is correct.
Action: Both rules have the Allow action, but your Web ACL should have a default action of Block to block any traffic that does not match either rule.
Next Steps:
Copy these JSON rules.
Go to the AWS Management Console -> WAF -> Your Web ACL -> Add rules.
Choose the option to use JSON format and paste the respective rule JSONs.
This setup will ensure that requests are handled as per the logic you outlined:
Any request not starting with /internal/ is allowed.
Any request to /internal/* is allowed only if it comes from an authorized IP set.
Let me know if you need any further adjustments!
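One subtlety of the `STARTS_WITH` constraint with search string `/internal/`: a request for exactly `/internal` (no trailing slash) does not match, so it falls through to the non-internal rule. A quick sketch of the matching semantics (plain string logic mirroring what the byte-match statement evaluates, not the WAF engine itself):

```typescript
// Mirrors WAF's STARTS_WITH positional constraint on the URI path.
function matchesInternalPrefix(uriPath: string): boolean {
  return uriPath.startsWith('/internal/');
}

// '/internal/healthcheck' → matched by the internal-route rule
// '/internal'             → NOT matched; treated as a non-internal route
// '/api/internal/x'       → NOT matched; the prefix must start the path
```

If `/internal` without a trailing slash should also be protected, use `/internal` as the search string instead, at the cost of also matching paths like `/internalreports`.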
Here’s a refined version of the problem description you can use for your start-up call:
During the process of implementing the ConstructInternalRouteWhiteListRule, we encountered an issue with how it interacted with an existing rule. The objective of this new rule was to restrict access to internal IPs when a specific route containing /internal/* was detected in the request path.
Initially, we had an existing rule in place that only allowed access to APIs if the request came from an internal route, effectively blocking all other external routes. However, as part of our testing, this rule was expected to be removed as the application moves to production.
When I attempted to test the newly created rule by removing the existing rule, we encountered a problem. Since the Web Application Firewall (WAF) has a default action of blocking all incoming requests unless explicitly allowed, without the existing rule, all incoming requests were blocked. The new rule alone was insufficient to process requests because it was set up to allow access based on specific IPs and paths, but no complementary rule existed to allow other non-internal routes.
This situation led us to realize that we needed to create two distinct rules:
One rule that allows non-internal routes, bypassing the need for IP checking.
Another rule that permits access to internal routes only if the request originates from an approved IP set.
These two rules would ensure a balance between internal and external access, with the proper fallback to block unwanted requests based on specific conditions.
This explanation provides clarity on the situation and decision-making process regarding the WAF rule setup and the need for the two new rules. Let me know if any adjustments are needed!
Got it! Since the default action for the WAF is allow, the rules will need to explicitly block requests that do not meet the criteria (such as being part of the internal IP set for /internal/* routes).
Here’s how the rules will be updated:
Internal Route Rule: Block requests to /internal/* if the IP is not part of the internal IP set.
External Route Rule: Allow requests that do not match /internal/* without any IP checks.
// RuleProperty is an interface, so plain object literals replace `new wafv2.CfnRuleGroup.RuleProperty(...)`.
const productionWafRules: wafv2.CfnWebACL.RuleProperty[] = [
  // Internal Route Rule for Production (Block if not part of internal IP set)
  {
    name: 'ProductionInternalRouteBlockRule',
    priority: 1,
    action: { block: {} }, // Block if not part of the IP set
    statement: {
      andStatement: {
        statements: [
          {
            byteMatchStatement: {
              searchString: '/internal/',
              fieldToMatch: { uriPath: {} },
              positionalConstraint: 'STARTS_WITH', // Required field; added
              textTransformations: [{ priority: 0, type: 'NONE' }],
            },
          },
          {
            notStatement: {
              statement: {
                ipSetReferenceStatement: {
                  arn: 'arn:aws:wafv2:region:account-id:ipset/internal-ip-set', // Replace with internal IP set ARN
                },
              },
            },
          },
        ],
      },
    },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: 'ProdInternalRouteBlockMetric',
      sampledRequestsEnabled: true,
    },
  },
  // External Route Rule for Production (Allow for all other routes)
  {
    name: 'ProductionExternalRouteAllowRule',
    priority: 2,
    action: { allow: {} }, // Allow for non-internal routes
    statement: {
      notStatement: {
        statement: {
          byteMatchStatement: {
            searchString: '/internal/',
            fieldToMatch: { uriPath: {} },
            positionalConstraint: 'STARTS_WITH', // Required field; added
            textTransformations: [{ priority: 0, type: 'NONE' }],
          },
        },
      },
    },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: 'ProdExternalRouteAllowMetric',
      sampledRequestsEnabled: true,
    },
  },
];
Internal Route Rule: Block requests to /internal/* if the IP is not part of the internal IP set.
External Route Rule: Block external routes (requests not matching /internal/*) unless the IP is part of a specific external IP set.
// RuleProperty is an interface, so plain object literals replace `new wafv2.CfnRuleGroup.RuleProperty(...)`.
const devWafRules: wafv2.CfnWebACL.RuleProperty[] = [
  // Internal Route Rule for Non-Production (Block if not part of internal IP set)
  {
    name: 'DevInternalRouteBlockRule',
    priority: 1,
    action: { block: {} }, // Block if not part of the IP set
    statement: {
      andStatement: {
        statements: [
          {
            byteMatchStatement: {
              searchString: '/internal/',
              fieldToMatch: { uriPath: {} },
              positionalConstraint: 'STARTS_WITH', // Required field; added
              textTransformations: [{ priority: 0, type: 'NONE' }],
            },
          },
          {
            notStatement: {
              statement: {
                ipSetReferenceStatement: {
                  arn: 'arn:aws:wafv2:region:account-id:ipset/internal-ip-set', // Replace with internal IP set ARN
                },
              },
            },
          },
        ],
      },
    },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: 'DevInternalRouteBlockMetric',
      sampledRequestsEnabled: true,
    },
  },
  // External Route Rule for Non-Production (Block if not part of external IP set)
  {
    name: 'DevExternalRouteBlockRule',
    priority: 2,
    action: { block: {} }, // Block if not part of external IP set
    statement: {
      andStatement: {
        statements: [
          {
            notStatement: {
              statement: {
                byteMatchStatement: {
                  searchString: '/internal/',
                  fieldToMatch: { uriPath: {} },
                  positionalConstraint: 'STARTS_WITH', // Required field; added
                  textTransformations: [{ priority: 0, type: 'NONE' }],
                },
              },
            },
          },
          {
            notStatement: {
              statement: {
                ipSetReferenceStatement: {
                  arn: 'arn:aws:wafv2:region:account-id:ipset/external-ip-set', // Replace with external IP set ARN
                },
              },
            },
          },
        ],
      },
    },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: 'DevExternalRouteBlockMetric',
      sampledRequestsEnabled: true,
    },
  },
];
Explanation:
ProductionInternalRouteBlockRule: Blocks requests to /internal/* if the IP is not in the internal IP set.
ProductionExternalRouteAllowRule: Allows requests to routes that are not /internal/*.
DevInternalRouteBlockRule: Blocks requests to /internal/* if the IP is not in the internal IP set.
DevExternalRouteBlockRule: Blocks requests to non-/internal/* routes if the IP is not in the external IP set.
Key Points:
Production:
/internal/* routes: Block if not in the internal IP set.
Other routes: Always allowed.
Non-Production:
/internal/* routes: Block if not in the internal IP set.
External routes: Block if not in the external IP set.
Next Steps:
Replace the ARN values with the correct IP set ARNs for your environment.
Deploy these rules based on your environment (production or non-production).
This setup ensures that only requests matching the /internal/* pattern and coming from the internal IP set are allowed, while the default for other routes is allow. Let me know if you need any further adjustments!
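Since the production and non-production rule sets differ, the choice can be made once at synth time. A sketch of that selection, with the rule arrays reduced to name/priority placeholders standing in for the full definitions above (`isProduction` is an assumed flag, not part of the original code):

```typescript
interface WafRule {
  name: string;
  priority: number;
}

// Placeholders standing in for the full productionWafRules / devWafRules above.
const productionWafRules: WafRule[] = [
  { name: 'ProductionInternalRouteBlockRule', priority: 1 },
  { name: 'ProductionExternalRouteAllowRule', priority: 2 },
];
const devWafRules: WafRule[] = [
  { name: 'DevInternalRouteBlockRule', priority: 1 },
  { name: 'DevExternalRouteBlockRule', priority: 2 },
];

// Pick the rule set for the target environment.
export function selectWafRules(isProduction: boolean): WafRule[] {
  return isProduction ? productionWafRules : devWafRules;
}
```

The selected array would then be passed as the `rules` property of the web ACL for that environment.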
Here’s how we can adjust the rule definitions based on your updated requirements:
Internal Route Rule: Blocks requests to /internal/* if the IP is not part of the internal IP set.
Global Block Rule: Blocks all requests if the IP is not in the internal IP set (this will be removed post-go-live).
// RuleProperty is an interface, so plain object literals replace `new wafv2.CfnRuleGroup.RuleProperty(...)`.
const productionWafRules: wafv2.CfnWebACL.RuleProperty[] = [
  // Internal Route Rule (Block if not part of internal IP set)
  {
    name: 'ProductionInternalRouteBlockRule',
    priority: 1,
    action: { block: {} }, // Block requests to /internal/* if IP is not in the internal IP set
    statement: {
      andStatement: {
        statements: [
          {
            byteMatchStatement: {
              searchString: '/internal/',
              fieldToMatch: { uriPath: {} },
              positionalConstraint: 'STARTS_WITH', // Required field; added
              textTransformations: [{ priority: 0, type: 'NONE' }],
            },
          },
          {
            notStatement: {
              statement: {
                ipSetReferenceStatement: {
                  arn: 'arn:aws:wafv2:region:account-id:ipset/internal-ip-set', // Replace with internal IP set ARN
                },
              },
            },
          },
        ],
      },
    },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: 'ProdInternalRouteBlockMetric',
      sampledRequestsEnabled: true,
    },
  },
  // Global Block Rule (Block all if not part of internal IP set)
  {
    name: 'ProductionGlobalBlockRule',
    priority: 2,
    action: { block: {} }, // Block all requests if IP is not in the internal IP set
    statement: {
      notStatement: {
        statement: {
          ipSetReferenceStatement: {
            arn: 'arn:aws:wafv2:region:account-id:ipset/internal-ip-set', // Replace with internal IP set ARN
          },
        },
      },
    },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: 'ProdGlobalBlockMetric',
      sampledRequestsEnabled: true,
    },
  },
];
Internal Route Rule: Blocks requests to /internal/* if the IP is not part of the internal IP set.
Existing Allow Rule: Allows traffic from IPs that are part of an existing IP set.
// RuleProperty is an interface, so plain object literals replace `new wafv2.CfnRuleGroup.RuleProperty(...)`.
const devWafRules: wafv2.CfnWebACL.RuleProperty[] = [
  // Internal Route Rule (Block if not part of internal IP set)
  {
    name: 'DevInternalRouteBlockRule',
    priority: 1,
    action: { block: {} }, // Block requests to /internal/* if IP is not in the internal IP set
    statement: {
      andStatement: {
        statements: [
          {
            byteMatchStatement: {
              searchString: '/internal/',
              fieldToMatch: { uriPath: {} },
              positionalConstraint: 'STARTS_WITH', // Required field; added
              textTransformations: [{ priority: 0, type: 'NONE' }],
            },
          },
          {
            notStatement: {
              statement: {
                ipSetReferenceStatement: {
                  arn: 'arn:aws:wafv2:region:account-id:ipset/internal-ip-set', // Replace with internal IP set ARN
                },
              },
            },
          },
        ],
      },
    },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: 'DevInternalRouteBlockMetric',
      sampledRequestsEnabled: true,
    },
  },
  // Existing Allow IP Rule
  {
    name: 'DevAllowIPRule',
    priority: 2,
    action: { allow: {} }, // Allow requests from a specific IP set
    statement: {
      ipSetReferenceStatement: {
        arn: 'arn:aws:wafv2:region:account-id:ipset/external-ip-set', // Replace with existing external IP set ARN
      },
    },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: 'DevAllowIPMetric',
      sampledRequestsEnabled: true,
    },
  },
];
Explanation of the Rule Updates:
ProductionInternalRouteBlockRule: Blocks requests to /internal/* if the IP is not in the internal IP set.
ProductionGlobalBlockRule: Blocks all requests if the IP is not in the internal IP set (temporary rule until go-live).
Post-go-live, you'll remove the ProductionGlobalBlockRule, so only /internal/* routes will have IP restrictions.
DevInternalRouteBlockRule: Blocks requests to /internal/* if the IP is not in the internal IP set.
DevAllowIPRule: Allows requests from a predefined IP set (existing external IP set).
Key Differences:
Production: Default is allow. We block /internal/* if IPs are not internal and also block everything else based on IPs (until go-live).
Non-Production: Default is block. /internal/* routes are blocked unless from an internal IP, and external routes are allowed only for a specific IP set.
Next Steps:
Replace the IP set ARNs with your correct internal and external IP sets.
In production, you'll eventually remove the ProductionGlobalBlockRule once the application goes live.
This setup ensures that internal routes are strictly controlled by the internal IP set, while production allows external traffic temporarily, and non-production is more restrictive by default.
export function constructRestrictInternalRoutesToInternalIPsRule(internalIPSetArn: string, priority: number): CfnWebACL.RuleProperty {
  return {
    name: 'restrictInternalRoutesToInternalIPs',
    priority: priority,
    action: {
      block: {}, // Block the request if the condition matches
    },
    visibilityConfig: {
      sampledRequestsEnabled: true,
      cloudWatchMetricsEnabled: true,
      metricName: 'Metric-RestrictInternalRoutesToInternalIP', // Metric names allow only alphanumerics, '-' and '_'
    },
    statement: {
      andStatement: {
        statements: [
          {
            notStatement: { // NOT logic to block non-authorized IPs
              statement: {
                ipSetReferenceStatement: {
                  arn: internalIPSetArn, // Reference to the authorized IP set
                },
              },
            },
          },
          {
            byteMatchStatement: {
              fieldToMatch: {
                uriPath: {}, // Inspect the URI path of the request
              },
              positionalConstraint: 'STARTS_WITH', // Match any route starting with /internal/
              searchString: '/internal/', // Match the /internal/ routes
              textTransformations: [
                {
                  priority: 0,
                  type: 'NONE',
                },
              ],
            },
          },
        ],
      },
    },
  };
}
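As a mental model for how this block rule combines with a web ACL whose default action is allow, here is a small simulation of just this rule's boolean logic (not the WAF engine; the in-memory IP set below is an assumption for illustration):

```typescript
const internalIPs = new Set(['10.0.0.1', '10.0.0.2']); // Stand-in for the internal IP set

// True if the rule blocks the request; otherwise the default action (allow) applies.
function isBlocked(sourceIp: string, uriPath: string): boolean {
  const notInternalIP = !internalIPs.has(sourceIp);       // notStatement over ipSetReferenceStatement
  const isInternalRoute = uriPath.startsWith('/internal/'); // byteMatchStatement, STARTS_WITH
  return notInternalIP && isInternalRoute;                  // andStatement
}

// isBlocked('8.8.8.8', '/internal/healthcheck')  → true  (external IP on an internal route)
// isBlocked('10.0.0.1', '/internal/healthcheck') → false (internal IP passes through)
// isBlocked('8.8.8.8', '/api/orders')            → false (default action allows it)
```

This makes explicit why a single block rule suffices here: every request that is not an internal route, or that comes from an internal IP, simply falls through to the default allow.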
Hi Chris,
Hilton asked me to prepare an agenda for tomorrow’s demo to share with Frankie. Eldho used to prepare this before each showcase and share it with Frankie, but since he is unavailable, Hilton asked me to handle it this time. I’ve discussed the items to be showcased with the team and drafted the agenda. I’ll send it over for your review—please feel free to make any modifications. I just wanted to confirm that you’re okay with me handling this.
Thanks!
Here’s the updated version with the additional details:
I’ve been planning my schedule for October, as we already have permission to work from the office on certain days. However, I want to ensure I make the most of this in an organized way. My main challenge with coming to the office is the distance I have to travel, so when I do come in, I prefer to work a full day rather than come briefly.
I have not raised a single continuous request for the entire month of October; instead, I have submitted requests for the specific weeks where I feel I need to work from home due to a large number of meetings, such as demos and customer showcases. For the weeks where I plan to come into the office, I haven’t raised any requests, as I intend to work from the office on those days. In all, I have split my requests into three separate ones based on my availability.
Since I’m still getting familiar with the process, having joined SLP in July, I wanted to clarify my approach. Please review the situation and let me know if this plan works for you. It’s essential that I balance my time between home and the office effectively, especially on days with lighter schedules where commuting makes more sense.
Thank you for your time and understanding. I look forward to your feedback.
This includes the details about splitting the requests into three parts. Let me know if you need further revisions!