Quentin Lhoest

lhoestq

AI & ML interests

Maintainer of πŸ€—Datasets: NLP, Multimodal data processing and sharing

Recent Activity

updated a dataset about 11 hours ago
infinite-dataset-hub/ClientHistoryPurchase
updated a dataset about 12 hours ago
infinite-dataset-hub/InventoryAnomalies
updated a dataset about 12 hours ago
infinite-dataset-hub/DiabetesQA
View all activity


Organizations

Hugging Face, WMT: Workshop on Statistical Machine Translation, BigScience Workshop, Neuropark, Hugging Face Internal Testing Organization, Training Transformers Together, BigScience Catalogue Data, OpenSLR, BigScience Data, Evaluation on the Hub, Datasets Maintainers, 2023 Jan Offsite hackathon, Whisper Distillation, Open LLM Leaderboard, huggingPartyParis, CommonCanvas, ZeroGPU Explorers, Datasets examples, Pixel Parsing, HuggingFaceFW-Dev, Infinite Dataset Hub, Hugging Face FineVideo, Dataset ReWriter, Dataset Tools

Posts
Hey! I'm working on a 100% synthetic Dataset Hub here (you can search for any kind of dataset and the app invents it). The link is here: infinite-dataset-hub/infinite-dataset-hub

Question for the Community:

Which models should I use to generate images and audio samples for those datasets? 🤗
✨ Easy Synthetic Dataset File Generation using LLM DataGen! Link: https://huggingface.co/spaces/lhoestq/LLM_DataGen

Features and how it works:

✍️ Generate the dataset content you want just by entering a file name
πŸ’‘ Optionally specify the column names you need
πŸ’¨ The dataset is streamed and generated on-the-fly in JSON Lines format
βœ… Generation is constrained to always output valid JSON

How does this work?
1/ Enter a file name.
2/ The model generates column names for such a file. Using structured generation, it produces 2 to 5 column names made of lowercase characters and underscores. I use a prompt that asks for column names for a realistic dataset, with a low temperature.
3/ The generated columns are used to update the Finite State Machine that constrains the dataset content generation, so that it generates JSON objects using exactly those columns.
4/ The model generates the JSON objects using structured generation again, with the updated Finite State Machine. I use a prompt that asks for realistic data and a temperature of 1.
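The column-to-schema step (2/ feeding into 3/) can be sketched in plain Python. This is an illustration of the idea, not the app's actual code; the column names and the string-only value types are assumptions:

```python
import re

def schema_from_columns(columns):
    """Build a JSON Schema that constrains generation to objects
    with exactly the given string-valued columns."""
    # Mirror the post's constraint: lowercase characters and underscores
    for name in columns:
        assert re.fullmatch(r"[a-z_]+", name), f"bad column name: {name}"
    return {
        "type": "object",
        "properties": {name: {"type": "string"} for name in columns},
        "required": list(columns),
        "additionalProperties": False,
    }

# Hypothetical columns the model might produce for a plants dataset
schema = schema_from_columns(["common_name", "sunlight", "watering_frequency"])
```

A schema like this is what the structured-generation machinery compiles into a Finite State Machine, so every generated object has exactly those keys.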

> Why update a Finite State Machine instead of re-creating one?

Creating one from scratch can take up to 30 seconds, while updating one takes about 0.1 s (though it requires manipulating a graph, which is not easy to implement).
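A toy illustration of the trade-off, using a dict-based state graph (far simpler than the real token-level machine): patching the existing graph to accept a new key touches a couple of entries, whereas rebuilding would mean recompiling the whole machine from the schema.

```python
# Toy FSM: each state maps an input symbol to the next state.
# The real machine maps token ids, but the update idea is the same.
fsm = {
    "start": {"{": "in_object"},
    "in_object": {"}": "end"},
}

def add_key(fsm, key):
    """Patch the graph in place to accept a new key,
    instead of rebuilding the machine from scratch."""
    fsm["in_object"][f'"{key}"'] = "after_key"
    fsm.setdefault("after_key", {})[":"] = "in_object"

add_key(fsm, "sunlight")

# Walk the updated machine over a symbol sequence
state = "start"
for symbol in ["{", '"sunlight"', ":", "}"]:
    state = fsm[state][symbol]
print(state)  # end
```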

> Batched generation is faster, why not use it?

Generating in batches is faster but tends to produce duplicates in this demo.
Further work could provide a different prompt per sequence in the batch, to end up with a different distribution of sequences in each batch, or implement a custom sampler that forbids generating the same data across sequences of the same batch.

> How does structured generation work?

I used the outlines library with transformers to define a JSON schema that the generation has to follow. It uses a Finite State Machine with token ids as transitions.
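The core of FSM-constrained decoding can be sketched without a real model: at each step, only token ids with an outgoing transition from the current state are allowed, and the sampler picks among those. The toy vocabulary, transitions, and scores below are invented for illustration; they are not the outlines internals:

```python
# Toy vocabulary: token id -> text
vocab = {0: "{", 1: '"name"', 2: ":", 3: '"fern"', 4: "}"}

# FSM with token ids as transitions: state -> {token_id: next_state}
fsm = {0: {0: 1}, 1: {1: 2}, 2: {2: 3}, 3: {3: 4}, 4: {4: 5}}

def constrained_sample(scores, state):
    """Pick the highest-scoring token among those the FSM allows."""
    allowed = fsm[state]
    token = max(allowed, key=lambda t: scores[t])
    return token, allowed[token]

# Fake model scores (these would come from the LM at each step)
scores = {0: 0.1, 1: 0.9, 2: 0.2, 3: 0.5, 4: 0.3}

state, out = 0, []
while state in fsm:  # stop when the machine reaches a final state
    token, state = constrained_sample(scores, state)
    out.append(vocab[token])
print("".join(out))  # {"name":"fern"}
```

Because every transition is keyed by token id, the masking step is a dictionary lookup per decoding step, which is what makes this kind of constrained generation cheap at inference time.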

Let me know what you think! And feel free to duplicate/modify it to try other models/prompts or sampling methods :)