Okay, so I'm working on a new web project that I can't really discuss yet. But here's the gist:
A user can upload a file. This file lands in an Amazon S3 bucket. A bucket is a flat container, roughly a single top-level folder. Let me propose a scenario:
User 1 uploads a file called "a.doc"
User 2 uploads a file called "a.doc"
In this instance, User 2's file will overwrite User 1's file. The obvious solution is to give these files unique names before we upload them to S3. Because of the way buckets work, we can't create sub-buckets. You can give files key names like "/users1/a.doc" and fake a folder hierarchy, but that can cause headaches down the line. KISS, yes? So: unique names.
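To make the "fake folders" point concrete, here's a rough sketch in Python with boto3 (the bucket name and key are made up for illustration):

```python
import boto3

s3 = boto3.client("s3")

# S3 keys are flat strings: the slash in "user1/a.doc" only *looks* like a
# folder. There is no real directory hierarchy inside a bucket.
s3.upload_file(
    Filename="a.doc",           # local file
    Bucket="my-upload-bucket",  # made-up bucket name
    Key="user1/a.doc",          # prefix fakes a per-user folder
)
```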
The other thing that's important here is that every file that gets uploaded gets a record written to a local SQL DB: just the S3 URL, a primary-key ID, the ID of the user who uploaded it, and some odds-and-ends data. We'll use these records to sort and display the files later.
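For the sake of discussion, assume the table looks roughly like this (the column names are my own stand-ins, SQLite for brevity):

```python
import sqlite3

conn = sqlite3.connect("uploads.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS uploads (
        id          INTEGER PRIMARY KEY AUTOINCREMENT,  -- the primary-key ID
        s3_url      TEXT NOT NULL,                      -- where the file lives
        uploader_id INTEGER NOT NULL,                   -- who uploaded it
        uploaded_at TEXT DEFAULT CURRENT_TIMESTAMP      -- odds and ends
    )
""")
conn.commit()
```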
So, I was thinking that the best way to uniquely ID these files would be the naming scheme "primaryfileid-filename.ext". But this requires that we write the record, then somehow get the ID back, all before the file gets uploaded. Guh. And it rests on the supposition that the file ID will always be the largest existing ID + 1, which may not hold if multiple users are hitting the system concurrently.
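That said, most database drivers will hand back the generated key from the INSERT itself, which sidesteps the largest-ID-plus-one guess entirely. Here's a sketch of idea 1 building on the table above (SQLite's lastrowid; the helper name and URL format are mine):

```python
def upload_with_db_id(local_path, filename, uploader_id, conn, s3, bucket):
    # Insert the record first with a placeholder URL...
    cur = conn.execute(
        "INSERT INTO uploads (s3_url, uploader_id) VALUES ('', ?)",
        (uploader_id,),
    )
    file_id = cur.lastrowid           # the real generated ID, not a guess
    key = f"{file_id}-{filename}"     # e.g. "42-a.doc"
    s3.upload_file(Filename=local_path, Bucket=bucket, Key=key)
    # ...then fill in the URL once the upload succeeds.
    conn.execute(
        "UPDATE uploads SET s3_url = ? WHERE id = ?",
        (f"https://{bucket}.s3.amazonaws.com/{key}", file_id),
    )
    conn.commit()
```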
Idea 2 is to just prepend the username AND a random number from 1 to 999 on the front, like this: "username-random-filename.ext". That one makes the most sense, I guess, but it also seems cumbersome.
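For comparison, idea 2 is basically a one-liner, but note the math: with only 999 random values, two uploads of the same filename by the same user still collide roughly 1 time in 999 (the helper name here is hypothetical):

```python
import random

def idea2_key(username: str, filename: str) -> str:
    # "username-random-filename.ext", per idea 2
    return f"{username}-{random.randint(1, 999)}-{filename}"

print(idea2_key("user1", "a.doc"))  # e.g. "user1-417-a.doc"
```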
Any other ideas? Am I overthinking this? Basically, I need a way to keep every file's name as close to guaranteed-unique as possible.