\n","updatedAt":"2024-12-03T23:33:09.820Z","author":{"_id":"64371b564aacf7bf786fb530","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/64371b564aacf7bf786fb530/0lZEdVu06bx11fy1uTjpt.jpeg","fullname":"Nymbo","name":"Nymbo","type":"user","isPro":true,"isHf":false,"isMod":false,"followerCount":255}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.9015058875083923},"editors":["Nymbo"],"reactions":[{"reaction":"🤝","users":["nroggendorff","John6666","d0rj","maywell"],"count":4}],"isReport":false}},{"id":"674fd4f47cdc8584b626b952","author":{"_id":"6640bbd0220cfa8cbfdce080","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6640bbd0220cfa8cbfdce080/wiAHUu5ewawyipNs0YFBR.png","fullname":"John Smith","name":"John6666","type":"user","isPro":true,"isHf":false,"isMod":false,"followerCount":431},"createdAt":"2024-12-04T04:05:08.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"I thought it was GB, but it was TB.","html":"I thought it was GB, but it was TB.
\n","updatedAt":"2024-12-04T04:05:08.146Z","author":{"_id":"6640bbd0220cfa8cbfdce080","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6640bbd0220cfa8cbfdce080/wiAHUu5ewawyipNs0YFBR.png","fullname":"John Smith","name":"John6666","type":"user","isPro":true,"isHf":false,"isMod":false,"followerCount":431}},"numEdits":0,"identifiedLanguage":{"language":"en","probability":0.9999514818191528},"editors":["John6666"],"reactions":[{"reaction":"👍","users":["nroggendorff","nyuuzyou"],"count":2}],"isReport":false}},{"id":"6751212a70a6b5e653d8e27f","author":{"_id":"6640bbd0220cfa8cbfdce080","avatarUrl":"https://cdn-avatars.huggingface.co/v1/production/uploads/6640bbd0220cfa8cbfdce080/wiAHUu5ewawyipNs0YFBR.png","fullname":"John Smith","name":"John6666","type":"user","isPro":true,"isHf":false,"isMod":false,"followerCount":431},"createdAt":"2024-12-05T03:42:34.000Z","type":"comment","data":{"edited":false,"hidden":false,"latest":{"raw":"Update:\nhttps://huggingface.co/posts/nroggendorff/302035312925507","html":"Update:
https://huggingface.co/posts/nroggendorff/302035312925507
Join the conversation
Join the community of Machine Learners and AI enthusiasts.
They know that having full models and stuff in the commit history is taking it up.. right?
I have no clue. I guess I'm about to find out..
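If commit history really is what counts, one way to inspect and prune it programmatically is sketched below with huggingface_hub. This is an unofficial sketch under assumptions: "your-username/your-model" is a placeholder repo id, and whether squashing actually frees quota-counted storage isn't confirmed in this thread - super_squash_history also rewrites history irreversibly, so treat it as illustrative only.

```python
# Unofficial sketch: inspect a model repo's commit history and, if old revisions
# are what's eating storage, squash the whole history into a single commit.
# "your-username/your-model" is a placeholder repo id (an assumption, not from the thread).
from huggingface_hub import HfApi

api = HfApi()  # assumes you're authenticated (HF_TOKEN or `huggingface-cli login`)
repo_id = "your-username/your-model"

# Each old commit may still reference large LFS files (weights, GGUFs, ...).
for commit in api.list_repo_commits(repo_id, repo_type="model"):
    print(commit.commit_id[:8], commit.title)

# Destructive and irreversible: collapses the branch history into one commit.
# Whether this reduces what counts toward your quota is an assumption here.
api.super_squash_history(repo_id, branch="main", repo_type="model")
```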
Adding context
@reach-vb made some clarifications in this post on r/LocalLLaMA
Here's another thread about the same issue.
It doesn't seem like anyone is actually hitting barriers with creating new repos or uploading currently. I do find it pretty sus to even show me the storage quota, assuming legitimate storage will always be granted anyway... 🤔
I'd appreciate a more official announcement on HF though; I've anecdotally seen a few notable creators panic-deleting repositories.
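For a rough, do-it-yourself estimate of what one repo occupies - rather than relying on the quota UI - the sketch below sums the file sizes the Hub API reports. The repo id is a placeholder, and this only covers the files in the current revision, not older revisions kept in history.

```python
# Unofficial sketch: estimate a single repo's footprint by summing reported file sizes.
# Only counts the current revision; historical LFS blobs are not included.
from huggingface_hub import HfApi

api = HfApi()
info = api.repo_info(
    "your-username/your-model",  # placeholder repo id
    repo_type="model",
    files_metadata=True,         # needed so each file's size is populated
)

total_bytes = sum((f.size or 0) for f in info.siblings)
print(f"{total_bytes / 1e9:.2f} GB across {len(info.siblings)} files")
```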
VB's reddit post
Heya! I’m VB, I lead the advocacy and on-device team @ HF. This is just a UI update for limits which have been around for a long while. HF has been and always will be liberal at giving out storage + GPU grants (this is already the case - this update just brings more visibility).
We’re working on updating the UI to make it clearer and more recognisable - grants are made for use-cases where the community utilises your model checkpoints and benefits from them. Quantising models is one such use-case; others are pre-training/fine-tuning datasets, model merges and more.
Similarly, we also give storage grants to multi-PB datasets like YODAS, Common Voice, FineWeb and the like.
This update is more for people who dump random stuff across model repos, or use model/dataset repos to spam users and abuse HF storage and the community.
I’m a fellow GGUF enjoyer, and a quant creator (see https://huggingface.co/spaces/ggml-org/gguf-my-repo) - we will continue to add storage + GPU grants as we have in the past.
Cheers!
I thought it was GB, but it was TB.
Update:
https://huggingface.co/posts/nroggendorff/302035312925507