QuadrupleA
Good. The first S in S3 is simple, despite nobody in software appreciating simplicity anymore.

Adding features makes documentation more complicated, makes the tech harder to learn, makes libraries bigger, likely harms performance a bit, increases bug surface area, etc.

When it gets too out of hand, people will paper it over with a new, simpler abstraction layer, and the process starts again, only with a layer of garbage spaghetti underneath.

Show your age and be proud, Simple Storage Service.

KaiserPro
S3 was/is optimised for a specific bunch of use cases.

Because EC2's hypervisor lacked features when it launched (no hot swap, no shared block storage, no host migration, no online backup/clone, no live recovery, no highly available host failover), S3 had to step in and pick up some of the slack that proper block or file storage would otherwise have taken.

For better or for worse, people adopted the ephemeral style of computing, and used S3 as the state store.

S3 got away with it because it was the only practical object store in town.

The biggest drawback it still has (and will likely always have) is that you can't write to part of a file: it's either replace the whole object or nothing.

But that's by design.

So I suspect it'll stay like that.
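
For reference, the closest thing to a partial write is multipart upload, which still materialises an entirely new object at the key rather than editing it in place. A rough boto3 sketch (bucket and key names are placeholders):

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "example-bucket", "example-object"

    # Start a multipart upload and send two parts (every part except the last
    # must be at least 5 MiB in a real upload).
    mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
    parts = []
    for part_number, chunk in enumerate([b"a" * 5 * 1024 * 1024, b"tail"], start=1):
        resp = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
            PartNumber=part_number, Body=chunk,
        )
        parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})

    # Completing the upload atomically replaces whatever was at the key;
    # there is no way to splice bytes into an existing object in place.
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
        MultipartUpload={"Parts": parts},
    )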

kylehotchkiss
I want somewhere reasonably priced to keep my files where I can confidently know they'll be there a decade from now.

For that purpose, I think S3's age is a killer feature. I won't be surprised when we see the HN post "google cloud storage has been sent to the graveyard".

andyjohnson0
The mere fact that abstractions like S3 even exist still boggles my mind. Infinitely scalable, indefinitely persistent, inexpensive, super-high reliability, software-addressable storage accessible from (almost) anywhere on the planet. I'm sure tfa's critique is valid, but also... we have miraculous tools.
laurencerowe
The lack of If-Match/If-None-Match preconditions in S3 is definitely frustrating.
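
For context, the precondition being asked for is standard HTTP If-Match / If-None-Match behaviour on writes. A hedged sketch of what that would look like through boto3, assuming an SDK version and endpoint that actually expose the IfNoneMatch parameter (support varies, so treat this as illustrative):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    try:
        # "Create only if the key does not already exist", i.e. If-None-Match: *
        s3.put_object(
            Bucket="example-bucket",
            Key="locks/leader",
            Body=b"node-1",
            IfNoneMatch="*",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "PreconditionFailed":
            print("someone else created the object first")
        else:
            raise
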
fred_is_fred
The title should have been "S3 is missing features others have". I didn't see any reference to "product age" causing his concerns.
dabinat
I really wish S3 had the ability to rename / move files without having to copy the data all over again. It seems like something that should not be necessary, given that that information is just metadata.

Even if there is some technical reason why the data needs copying, S3 could at least pretend that the file is in the new place until it’s actually there.
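
The usual workaround is a server-side copy followed by a delete, which avoids the client round trip but still rewrites the data on S3's side. A minimal boto3 sketch (names are placeholders; objects over 5 GB need the multipart copy path instead):

    import boto3

    s3 = boto3.client("s3")
    bucket = "example-bucket"
    old_key, new_key = "reports/2023.csv", "archive/reports/2023.csv"

    # Server-side copy: the data never transits the client, but S3 still
    # rewrites the whole object under the new key.
    s3.copy_object(
        Bucket=bucket,
        Key=new_key,
        CopySource={"Bucket": bucket, "Key": old_key},
    )
    s3.delete_object(Bucket=bucket, Key=old_key)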

watermelon0
> S3 doesn’t have dual-region or multi-region buckets

This is true, but S3 does support replication between two regions (including delete marker replication), and even two-way replication. It's definitely not the same thing as a dual-region bucket, but it can satisfy many of the use cases where a dual-region bucket would otherwise be used.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/replic...
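
Roughly, a cross-region replication rule looks like the sketch below (illustrative only: the role ARN and bucket names are placeholders, versioning must already be enabled on both buckets, and two-way replication means configuring the mirror-image rule on the other bucket):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="example-source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/example-replication-role",
            "Rules": [
                {
                    "ID": "replicate-everything",
                    "Priority": 1,
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # empty prefix = whole bucket
                    "DeleteMarkerReplication": {"Status": "Enabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::example-destination-bucket"},
                }
            ],
        },
    )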

estebarb
Append would allow building a lot of other systems. I mean, the only functional difference between S3 and GFS is the append operation. Google built BigTable, Megastore, and who knows what else on top of GFS. You can't do the same with S3 (without implementing the append somewhere else yourself).
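
One common way people "implement the append somewhere else" is to write each append as its own numbered object under a prefix and reassemble on read. A hedged sketch, with placeholder names and no coordination between concurrent writers:

    import boto3

    s3 = boto3.client("s3")
    bucket, prefix = "example-bucket", "logs/stream-42/"

    def append_chunk(seq: int, data: bytes) -> None:
        # Zero-padded sequence numbers keep lexical key order == write order.
        s3.put_object(Bucket=bucket, Key=f"{prefix}{seq:012d}", Body=data)

    def read_all() -> bytes:
        chunks = []
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"]
                chunks.append(body.read())
        return b"".join(chunks)
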
menacingly
Good. This is a part of the stack I want to be boring. I don't need innovative features.

It's the egress costs, which are an actual federal hate crime, that I could do without.

mythz
S3 does simple, reliable object storage well. If anything, it's the draconian pricing model, with its exorbitant bandwidth charges, that is showing its age, and it's why we've moved to Cloudflare R2 for its zero egress fees.

Being able to reuse the S3 command line and existing S3 libraries has made the migration painless, so I'm thankful they've created a de facto standard that other S3-compatible object storage providers can implement.

pryz
> By embracing DynamoDB as your metadata layer, systems stand to gain a lot.

Yes yes yes. However, DynamoDB can get expensive very quickly :]

niuzeta
This is perfectly fine. S3 is simple and stable, and that's its selling point. There's no competing with that history and proven stability; it beats chasing shiny features.
personomas
S3 _does_ support CAS on copyObject [0].

Cloudflare supports CAS on copyObject and on putObject [1]. It doesn't support CAS on deleteObject.

I don't know about the others though (ABS, GCS, Tigris, MinIO).

[0] https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObje...

[1] https://developers.cloudflare.com/r2/api/s3/extensions/#cond...
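
One reading of "CAS on copyObject" is the copy-source precondition, i.e. the copy only succeeds if the source still carries the ETag you expect. A hedged boto3 sketch under that assumption (bucket/key names and the ETag are placeholders):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    try:
        s3.copy_object(
            Bucket="example-bucket",
            Key="config/current.json",
            CopySource={"Bucket": "example-bucket", "Key": "config/staged.json"},
            # Only copy if the staged object still has the ETag we read earlier.
            CopySourceIfMatch='"expected-etag-of-staged-object"',
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "PreconditionFailed":
            print("staged object changed since we last read it; retry")
        else:
            raise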

chucke1992
How is S3 right now in comparison to Azure Storage Accounts?
nathants
s3 is not perfect, but close enough.

the ideal design is to use s3 for immutable objects, then build a similar system on ec2 nvme for mutable/ephemeral data.

the one i use by default is s4[1].

1. https://github.com/nathants/s4

Lucasoato
One feature that I definitely miss is a strongly consistent putIfAbsent API. A lot of big data table formats like Delta.io would benefit so much from it; right now you need to work around it by connecting to DynamoDB :/
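
The DynamoDB workaround usually boils down to a conditional PutItem acting as the missing putIfAbsent, so only one writer wins a given commit. A hedged sketch with placeholder table and attribute names (assuming a table keyed on logPath + version):

    import boto3
    from botocore.exceptions import ClientError

    ddb = boto3.client("dynamodb")

    def try_commit(log_path: str, version: int) -> bool:
        """Returns True if this writer won the commit for (log_path, version)."""
        try:
            ddb.put_item(
                TableName="example-commit-log",
                Item={
                    "logPath": {"S": log_path},
                    "version": {"N": str(version)},
                },
                # Rejected if an item with this key already exists.
                ConditionExpression="attribute_not_exists(logPath)",
            )
            return True
        except ClientError as err:
            if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
                return False
            raise
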
someguy4242
Can anybody clarify how the author proposed doing a two-phase commit/write with DDB and S3?
miniman1337
S3 Object Lambda solves a lot of these "problems" with this design.
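
For anyone unfamiliar, an Object Lambda access point routes GETs through a Lambda function that can rewrite the response on the fly. A rough sketch of such a handler (illustrative only; the transformation here is a placeholder):

    import urllib.request

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        ctx = event["getObjectContext"]
        # Fetch the original object via the presigned URL S3 hands the function.
        original = urllib.request.urlopen(ctx["inputS3Url"]).read()
        transformed = original.upper()  # placeholder transformation
        # Stream the rewritten bytes back to the caller of GetObject.
        s3.write_get_object_response(
            RequestRoute=ctx["outputRoute"],
            RequestToken=ctx["outputToken"],
            Body=transformed,
        )
        return {"statusCode": 200}
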
jonstewart
When I have a problem figuring something out with S3, I slide into my shared slack channel with my AWS account rep and solutions engineers and ask them for help, and they help. Is this a feature other cloud storage providers have?

(But this rarely happens with S3, because it's so simple.)

robertclaus
It feels like the S3 team has managed to avoid chasing the marginal user with features. A great example of restraint. It does raise the question of whether the system could be improved in other areas, like optimizations to reduce price.
akira2501
> By embracing DynamoDB as your metadata layer, systems stand to gain a lot.

I just implemented a "posix-like" filesystem on top of it, which means offloading large objects to S3 is not a problem or even an "ugly abstraction." In fact, it looks quite natural once you get down to the layer that does this.

You also get something like regular file locks that extend to S3 objects, and if you're using Cognito, you can simplify your permissions management and run it all through the filesystem as well.
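
Not the commenter's actual implementation, but the general shape of "DynamoDB as the metadata layer, payload offloaded to S3" looks something like this (table, bucket, and attribute names are placeholders):

    import uuid

    import boto3

    table = boto3.resource("dynamodb").Table("example-fs-metadata")
    s3 = boto3.client("s3")
    BUCKET = "example-fs-blobs"

    def write_file(path: str, data: bytes) -> None:
        # The payload lives in S3; DynamoDB holds the "inode": path, size, pointer.
        blob_key = f"blobs/{uuid.uuid4()}"
        s3.put_object(Bucket=BUCKET, Key=blob_key, Body=data)
        table.put_item(Item={"path": path, "size": len(data), "s3_key": blob_key})

    def read_file(path: str) -> bytes:
        item = table.get_item(Key={"path": path})["Item"]
        return s3.get_object(Bucket=BUCKET, Key=item["s3_key"])["Body"].read()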

AtlasBarfed
The biggest problem is the cost.

Actual storage cost is "meh" but...

My god the absolute highway robbery of the bandwidth costs.

brcmthrowaway
How is s3 implemented?

Is there a data center with many solid state disks?

Are tape drives used?

esteer
That is a very cool name for a blog.
at_a_remove
I suppose that means I have to learn it now. I'm about half-joking: I've deliberately sought to limit myself to tech further on in the hype cycle.