Dongw1126 opened this issue 2 years ago
Having the exact same issue. I've tried to use the image URL directly but I regularly get 403s unless the entire bucket is public
related to #6935
same here.
It's been a while, @nadetastic.
Any progress on this?
Hi @kimfucious - we are working on this feature, and have identified a couple of options for moving forward. We will provide updates on this ticket when we have timelines figured out!
Hi @abdallahshaban557,
Thanks for the follow up.
Something to consider:
As it stands, we need to do a `const s3Url = await Storage.get(imageKey)` to get the URL for an image in storage.
As far as I can tell, there is no way for the browser to cache these, which results in unnecessary fetches. I could be wrong.
I wound up creating a caching mechanism to deal with this, but--to be blunt--we really shouldn't have to jump through such hoops.
I'll share it here, in case someone comes across this thread and might find use for it.
```tsx
import React, { createContext, useContext, useEffect } from "react";
import { Storage } from "aws-amplify";
import chalk from "chalk";
import config from "../../config/config.json";

const isDebug = config.site.IS_DEBUG;
const isDev = process.env.NODE_ENV === "development";

interface ImageCacheContextType {
  getImageWithCache: (id: string, imageKey: string) => Promise<string>;
}

const ImageCacheContext = createContext<ImageCacheContextType | null>(null);

export const useImageCache = (): ImageCacheContextType => {
  const context = useContext(ImageCacheContext);
  if (!context) {
    throw new Error("useImageCache must be used within an ImageCacheProvider");
  }
  return context;
};

interface ImageCacheProviderProps {
  children: React.ReactNode;
}

const imageCache: Record<string, string> = {};

export default function ImageCacheProvider({
  children,
}: ImageCacheProviderProps): JSX.Element {
  async function getImageWithCache(
    id: string,
    imageKey: string
  ): Promise<string> {
    if (imageCache[id]) {
      if (isDebug || isDev) {
        console.log(chalk.green("Image cache hit!"));
      }
      return imageCache[id];
    }
    if (isDebug || isDev) {
      console.log(chalk.yellow("Image cache miss!"));
    }
    // Fetch the signed URL once, download the image, and keep a stable
    // object URL so the browser doesn't re-fetch on every render.
    const s3Url = await Storage.get(imageKey);
    const response = await fetch(s3Url);
    const blob = await response.blob();
    const objectUrl = URL.createObjectURL(blob);
    imageCache[id] = objectUrl;
    return objectUrl;
  }

  // Revoke all object URLs on unmount to avoid leaking blob memory.
  useEffect(() => {
    return () => {
      for (const id in imageCache) {
        if (Object.prototype.hasOwnProperty.call(imageCache, id)) {
          URL.revokeObjectURL(imageCache[id]);
        }
      }
    };
  }, []);

  const contextValue: ImageCacheContextType = { getImageWithCache };

  return (
    <ImageCacheContext.Provider value={contextValue}>
      {children}
    </ImageCacheContext.Provider>
  );
}
```
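For anyone not on React, the same idea can be sketched framework-free. This is a hypothetical helper, not part of Amplify: the `resolve` callback stands in for the `Storage.get` + `fetch` + `createObjectURL` chain above, and caching the in-flight promise also deduplicates concurrent requests for the same key.

```typescript
type Resolver = (key: string) => Promise<string>;

// Wrap any async URL resolver with a per-key cache. Storing the
// promise (not the resolved value) means concurrent callers for the
// same key share a single underlying fetch.
function makeCachedGetter(resolve: Resolver): Resolver {
  const cache = new Map<string, Promise<string>>();
  return (key) => {
    let hit = cache.get(key);
    if (!hit) {
      hit = resolve(key);
      cache.set(key, hit);
    }
    return hit;
  };
}
```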
I look forward to a solution that allows us to store non-expiring URLs somewhere.
Hi @kimfucious - As part of enabling this, we are also considering giving you the ability to cache all your content behind a CDN such as CloudFront. So we are thinking about that as well!
Great to hear @abdallahshaban557.
Kindly consider that we're not all hosted on Amazon.
While I have apps that are, this particular app is hosted on Vercel.
Hi @abdallahshaban557,
Here's another scenario to consider:
<meta property="og:image" content="https://myapp-storage.s3.us-region-2.amazonaws.com/images/thumb-123.png">
@kimfucious - just to make sure I understand, can you elaborate on that please?
Hi @abdallahshaban557,
Thanks for the follow-up. I realize my comment was a bit vague in hindsight. Let me try to clarify.
We are talking about non-expiring URLs that are retrieved via the `Storage.get(key, {options})` pattern.
If we are in a Next.js app, dynamically generating routes/pages, and we want those routes/pages to have an `og:image` meta tag, we want a URL that does not expire.
We can use the npm package `next-seo` to populate a meta tag like the one below, provided we have a non-expiring URL.
<meta property="og:image" content="https://<non-expiring-url-to-article-image>">
The easiest way to do this would be--I could be wrong--to get a non-expiring URL via `Storage.get()` right after the `PUT` when the image is uploaded to Storage, and save it somewhere (e.g. in a db).
Another way would be to handle this in something like `getStaticProps`/`getServerSideProps`.
Or it could be a little of both, regardless...
As it stands, any URLs we put in meta tags with the above methods will expire. If someone sends a link to a page with an expired URL, the preview image is broken, and Google search results will not show the preview image either.
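As a stopgap for the server-rendered case, a fresh signed URL could be generated per request so crawlers never see an expired link (this only helps live crawls, not previews cached with an old URL). A minimal sketch, with `storageGet` standing in for Amplify's `Storage.get` and `buildSeoProps` a hypothetical helper:

```typescript
type StorageGet = (key: string) => Promise<string>;

// Resolve the og:image URL at request time (e.g. inside
// getServerSideProps), so each crawl receives an unexpired signed URL.
async function buildSeoProps(imageKey: string, storageGet: StorageGet) {
  const ogImage = await storageGet(imageKey);
  return { props: { ogImage } };
}
```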
I know that I could just create a public S3 bucket and not be hindered by such things, but I'm trying to work within the Amplify way, assuming these are best practices.
@dabit3 is a master at this stuff, so he may have some ideas.
Hi @kimfucious - that makes sense! We really appreciate all this feedback!
My app is also dependent upon this, in a similar way to @kimfucious: I have avatars for users which are stored in my S3 Amplify Storage in a bucket that has public access. I need to pass the URLs for these avatars to a 3rd party service (integrated into my app) which stores them in their database. Because the URLs expire so quickly, the avatars do not work. Rather than make my bucket publicly-facing, I would like to get non-expiring URLs from `Storage.get()`.
So, @abdallahshaban557 I would like to confirm: is your upcoming solution (or part of it) that we'll be able to get the public URL from S3 for a resource that is publicly facing?
I'm asking because @kimfucious said "I know that I could just create a public S3 bucket [...]" and I'm wondering if that will be "the Amplify way" as part of your solution?
Hi, @DaryBeattie,
> I'm asking because @kimfucious said "I know that I could just create a public S3 bucket [...]" and I'm wondering if that will be "the Amplify way" as part of your solution?
While I can't respond for the Amplify team, I'll add my two cents here for clarity:
Kindly confirm, @abdallahshaban557.
Hi @kimfucious and @DaryBeattie - we actually want to support a solution that works for both! So you either can use a public S3 bucket that we can then enable through Amplify Storage, or if you are an existing Amplify Storage developer - giving you a new prefix that creates long lived public facing URLs.
Let me know your thoughts!
Hi @abdallahshaban557,
Thanks for the follow up.
A solution that handles both sounds great!
I hope this happens soon.
Please add this feature, it will be very useful!
One more use case for having a public S3 URL from `Storage.get`: the signed URLs with access tokens are way too long. They break Stripe's image URL length limit (2048 chars). As things stand, Amplify does not support sending an image URL to Stripe.
StripeInvalidRequestError: Invalid URL: URL must be 2048 characters or less.
Hi @abdallahshaban557, in the meantime, while this gets fundamentally fixed, is it possible to increase the max value for the `expires` attribute to some higher number? Currently the max value is 1 hour, but something significantly higher would be helpful.
await Storage.get(imageKey, { expires: 86400 });
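Worth noting: S3 presigned URLs signed with Signature Version 4 are rejected by S3 beyond 7 days (604800 seconds), so no client-side option can go past that ceiling. A hypothetical helper to clamp a requested expiry:

```typescript
// SigV4 presigned URLs are rejected by S3 beyond 7 days, so clamp
// any requested expiry to that ceiling (sketch, not an Amplify API).
const MAX_PRESIGN_SECONDS = 7 * 24 * 60 * 60; // 604800

function clampExpiry(requestedSeconds: number): number {
  return Math.min(Math.max(requestedSeconds, 1), MAX_PRESIGN_SECONDS);
}
```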
EDIT: No longer needed in favor of the workaround I posted below
FWIW, here's the workaround I did until this feature gets implemented. Somewhat hacky, but it is forward compatible once the actual feature arrives.
1. Make the public folder of the Amplify Storage S3 bucket publicly accessible. (Urgh... but it should have the same side effect as creating a separate public-facing S3 bucket.)
2. Create a custom plugin for Storage that returns a non-signed URL of the asset.
```typescript
import { Storage, StorageProvider } from "@aws-amplify/storage";

const BUCKET_NAME = "yourbucketname";
const REGION = "your region";

class TempStorageProvider implements StorageProvider {
  static category = "Storage";
  static providerName = "TempStorage";

  get(key: string): Promise<string> {
    // Build the unsigned public URL directly instead of presigning.
    const url = `https://${BUCKET_NAME}.s3.${REGION}.amazonaws.com/public/${key}`;
    return Promise.resolve(url);
  }

  getCategory(): string {
    return TempStorageProvider.category;
  }

  getProviderName(): string {
    return TempStorageProvider.providerName;
  }

  // Delegate everything else to the default provider.
  configure = Storage.configure;
  put = Storage.put;
  remove = Storage.remove;
  list = Storage.list;
}

Storage.addPluggable(new TempStorageProvider());
```
Then use the `Storage.get` API with the extra config: `const url = await Storage.get(key, { provider: "TempStorage" });`
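The unsigned URL shape the workaround relies on can also be built directly; a sketch (bucket and region are placeholders, and the default Amplify `public/` prefix is assumed):

```typescript
// Build the unsigned public-object URL the workaround returns.
// Keys may contain slashes, so encode each path segment separately.
function publicS3Url(bucket: string, region: string, key: string): string {
  const encodedKey = key.split("/").map(encodeURIComponent).join("/");
  return `https://${bucket}.s3.${region}.amazonaws.com/public/${encodedKey}`;
}
```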
Once the feature arrives, you can simply do the following steps to make it "properly" work:
I am also looking for a solution for this scenario. Any update on the timeline, @abdallahshaban557?
Hello @himanshugupta0007 - we do not have an update yet. We will keep this issue updated once we have made more progress in this area.
Any updates here? Also hoping to get a nonexpiring URL for some of my bucket content.
@abdallahshaban557 ?
Hey guys, any progress on this one? Having the same issue in a Nuxt app. I want to use S3 images in `og:image` tags.
Hello, any updates on this?
Also, is there a workaround for this using CloudFront?
Is this related to a new or existing framework?
No response
Is this related to a new or existing API?
Storage
Is this related to another service?
S3
Describe the feature you'd like to request
I want to get the plain public object URL when I use `Storage.get`. Currently, I receive only a signed URL.
I'm going to use S3 as image storage that all users can write and read. However, because of the signing, the URL changes every time users refresh, so image caching doesn't work and resources are wasted.
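The caching break comes from the signature query string: the object path stays the same, but the query parameters rotate per request, so the browser treats every signed URL as a distinct resource. A hypothetical illustration of deriving a stable key by dropping the query:

```typescript
// A presigned URL's origin + path identify the object; the query
// string carries the rotating signature. Strip it for a stable key.
function stableCacheKey(signedUrl: string): string {
  const u = new URL(signedUrl);
  return u.origin + u.pathname;
}
```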
Describe the solution you'd like
I'd like to get a public object URL without a signature from `Storage.get`. I wish Amplify would provide an option for public objects.
Describe alternatives you've considered
It would be nice if the signed URL doesn't change on every refresh.
Additional context
No response
Is this something that you'd be interested in working on?