Closed mtrezza closed 5 years ago
I am also facing the same issue on AWS. I used AWS for almost 1 year, but I am moving to DigitalOcean. The problem with AWS is that they charge for bandwidth on almost everything, including hard disk, whereas DigitalOcean has fixed pricing; I have now been able to reduce my cost to 60% of what I was paying on AWS.
@agwl-saurabh What I don't understand is why the costs are significantly less on Heroku. How can PaaS be more expensive than IaaS when Heroku itself runs on AWS? If they have some bandwidth discount, they got a pretty good deal with AWS, saving ~70% of costs.
The very simple answer: when you use AWS, you actually start using a lot of unwanted services which really add up the cost. For example, I was using Parse on AWS Elastic Beanstalk (via the one-click button in the Parse example), and while I was on the free tier I was not paying for load balancer data transfer and a lot more was free; then, after my free tier ended, my bill suddenly became almost 2.5x what I was paying on the free tier. FYI, I was using two large EC2 instances and one medium EC2 instance. Even though my server load decreased over one year, the cost increased :).
So after a lot of research and two months running on DigitalOcean, I found it the cheapest option.
I already tried Google Cloud as well.
Without seeing your console I will not be able to tell you the exact issue. I suggest you take paid support from AWS, get their engineer on a call, and work through it; it will not cost you more than $100 one time.
but on Heroku you use very limited services.
The higher costs for me come specifically from bandwidth:
Possible causes:
- Could it be an issue that parse server uses `http://localhost/parse` as `serverUrl` instead of a domain or IP? It is not https, but if it's used only for cloud code to call parse server, that shouldn't be a problem. `publicServerUrl` is `https://example.com/parse`.
- The files from S3 (`directAccess=false`) are routed through the public IP `https://bucketname.s3.amazonaws.com...`. When I `ssh` into an EC2 instance and do an `nslookup`, it resolves to a public IP. Luckily there is a `baseUrl` parameter, once I figure out the static private IP of the bucket.

Using `directAccess=false` forces the server to load the file into memory on each request, thus increasing the latency and the instance's memory & CPU requirements, and doubling the bandwidth usage.
Also, if you are behind an Elastic Load Balancer, this will increase the cost. See LCU Pricing Details.
I highly recommend that you keep `directAccess=true`; you will need fewer instances and will offload a considerable amount of work to S3, since S3 is designed to scale massively.
Problems will become more apparent once you start dealing with files larger than a couple of megabytes.
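For reference, the options discussed in this thread sit together in the Parse Server config roughly like this. This is only a sketch; all concrete values (app ID, keys, bucket, region, domains) are placeholders, not values from this thread:

```javascript
// Sketch of a Parse Server config with the S3 files adapter.
// All concrete values below are placeholders.
const { ParseServer } = require('parse-server');
const S3Adapter = require('@parse/s3-files-adapter');

const api = new ParseServer({
  appId: 'myAppId',
  masterKey: 'myMasterKey',
  // serverURL is what cloud code uses to call back into parse server;
  // plain http on localhost is fine for loopback-only traffic.
  serverURL: 'http://localhost/parse',
  // publicServerURL is what client-facing URLs (e.g. file links) are built from.
  publicServerURL: 'https://example.com/parse',
  filesAdapter: new S3Adapter({
    bucket: 'bucketname',
    region: 'us-east-1',
    // false routes every file download through parse server itself
    // (and therefore through the ELB and EC2 instances).
    directAccess: false,
  }),
});
```

With `directAccess: true` instead, file URLs point clients at S3 directly, which is what takes the download traffic off the instances.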
@georgesjamous Good point. I have now set up a CDN instead of allowing direct access to the S3 bucket, to keep the bucket private and reduce the bandwidth costs.
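A sketch of what that can look like with the S3 files adapter, assuming a CloudFront distribution in front of a private bucket (the CDN domain below is a placeholder; the distribution would use an Origin Access Identity so the bucket itself never needs to be public):

```javascript
const S3Adapter = require('@parse/s3-files-adapter');

// Serve files via a CDN in front of a private bucket: file URLs are
// rewritten to the CDN domain instead of s3.amazonaws.com, so clients
// never hit the bucket (or parse server) directly for downloads.
const filesAdapter = new S3Adapter({
  bucket: 'bucketname',
  region: 'us-east-1',
  directAccess: true,
  baseUrl: 'https://d111111abcdef8.cloudfront.net',
  // baseUrlDirect: true means baseUrl already points at the files
  // themselves, without the bucket name in the path.
  baseUrlDirect: true,
});
```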
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Costs are down considerably after:
After moving from Heroku to AWS, the costs of running Parse Server have significantly increased (more than 3 fold).
- Heroku: on average 4 dynos (`STANDARD 2X`)
- AWS: on average 5 `t2.small` instances on Elastic Beanstalk with an Application Load Balancer

In both cases `directAccess=false`.
AWS Cost Explorer shows that >50% of the costs come from `DataTransfer-Out-Bytes`. The files on S3 are routed through parse server, i.e. the ELB and EC2 instances, when requested by a client. Could it be possible that when a client requests a file from parse server, the data somehow goes from S3 to the internet gateway and back to the EC2 instance ($0.09/GB) instead of directly to the EC2 instance ($0)?
On the other hand, the Cost Explorer shows that the S3 bucket itself has only a minimal cost effect; >50% of the costs come from the ELB.
I have no clue as to what is causing this cost difference - could it be a misconfiguration of Parse Server on AWS?
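As a back-of-the-envelope check on the "double bandwidth" effect of `directAccess=false`: the $0.09/GB egress rate is the figure quoted above, while the file size and request volume are made-up illustrative numbers, not data from this thread:

```javascript
// With directAccess=false a file enters the instance from S3 and leaves
// again toward the client, so the instance handles twice the file volume;
// only the outbound half is billed as internet egress.
const fileSizeMB = 10;            // assumed average file size
const downloadsPerMonth = 100000; // assumed request volume
const egressCentsPerGB = 9;       // $0.09/GB, as quoted above

const gbServed = (fileSizeMB * downloadsPerMonth) / 1000; // 1000 GB to clients
const gbThroughInstance = gbServed * 2;                   // 2000 GB in + out
const egressDollars = (gbServed * egressCentsPerGB) / 100;

console.log(gbThroughInstance); // 2000
console.log(egressDollars);     // 90
```

With `directAccess=true` the same downloads would be S3-to-internet traffic instead, skipping the ELB and the instances entirely.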