ArlowGPT-3B-Multilingual
by
yuchenxie
Language Model
OTHER
3B params
Quick Info
Released
11/3/2024
Framework
OTHER
Resources
Training Data Analysis
🟡 Average (4.7/10)
An analysis of the training datasets used by ArlowGPT-3B-Multilingual, with a quality assessment for each.
Specialized For
general
science
multilingual
Training Datasets (3)
Common Crawl
🔴 2.5/10
general
science
Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training d...
- Diversity: The dataset captures billions of web pages across multiple domains and content types, ena...
- Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web acro...
Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig...
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent c... (a rough filtering sketch follows below)
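The problematic-content consideration implies a filtering pass before raw crawl text is usable for pretraining. The sketch below is a minimal, hypothetical heuristic filter in Python: the `keep_document` name, the thresholds, and the blocklist are illustrative assumptions, not the pipeline actually used for ArlowGPT-3B-Multilingual.

```python
# Hypothetical blocklist; a real pipeline would use curated lists and trained classifiers.
BLOCKED_TERMS = {"viagra", "xxx", "free casino"}

def keep_document(text: str, min_words: int = 50, max_symbol_ratio: float = 0.3) -> bool:
    """Return True if a crawled document passes simple quality/safety heuristics.

    All thresholds here are illustrative assumptions, not values from any
    actual training pipeline.
    """
    words = text.split()
    if len(words) < min_words:  # drop very short boilerplate pages
        return False

    # Reject pages dominated by non-alphanumeric noise (menus, markup debris).
    symbols = sum(1 for ch in text if not ch.isalnum() and not ch.isspace())
    if symbols / max(len(text), 1) > max_symbol_ratio:
        return False

    # Crude lexical screen for obviously problematic content.
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False
    return True


if __name__ == "__main__":
    sample = "Common Crawl captures billions of pages across many domains. " * 20
    print(keep_document(sample))  # True for this synthetic example
```

Real cleaning pipelines layer many such heuristics with deduplication and model-based classifiers; this sketch only shows the shape of the step.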
WebText
🔵 6.5/10
general
Key Strengths
- Quality Signal: Human curation through Reddit upvotes (see the curation sketch below)
- Effective: Produced high-performing GPT-2 model
- Influential: Established importance of careful dataset curation
Considerations
- Proprietary: Original dataset not publicly available
- Limited Size: 40GB relatively small by modern standards
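WebText's quality signal came from keeping only outbound links shared in Reddit posts with at least 3 karma. The sketch below mirrors that selection step under stated assumptions: `select_webtext_urls` and the example link data are hypothetical, and the original proprietary pipeline additionally deduplicated documents and extracted article text.

```python
# A minimal sketch of WebText-style curation: keep only URLs shared in Reddit
# posts receiving at least 3 karma. Data and function names are hypothetical.

def select_webtext_urls(candidate_links, min_karma=3):
    """Return the set of URLs whose best-scoring Reddit post meets the karma bar."""
    best_karma = {}
    for url, karma in candidate_links:
        best_karma[url] = max(best_karma.get(url, 0), karma)
    return {url for url, karma in best_karma.items() if karma >= min_karma}


if __name__ == "__main__":
    links = [
        ("https://example.com/deep-dive", 57),   # upvoted: kept
        ("https://example.com/spam-page", 1),    # low karma: dropped
        ("https://example.com/deep-dive", 2),    # duplicate URL, best score wins
    ]
    print(select_webtext_urls(links))
```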
Wikipedia
🟡 5/10
science
multilingual
Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citatio...
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and ... (see the streaming sketch below)
- Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to ...
Considerations
- Language Inequality: Low-resource language editions have significantly lower quality, fewer articles...
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and ...
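To make the multilingual-coverage point concrete, here is a rough sketch of streaming a few Wikipedia language editions with the Hugging Face `datasets` library. The `wikimedia/wikipedia` dataset name, the `20231101.*` config strings, and the record fields are assumptions about that Hub dataset, and this is not a claim about how ArlowGPT-3B-Multilingual's Wikipedia data was actually prepared.

```python
from datasets import load_dataset

# Language editions to sample; the codes and the dump date are illustrative assumptions.
LANGUAGES = ["en", "de", "sw"]

for lang in LANGUAGES:
    # Streaming avoids downloading the full dump for each language.
    ds = load_dataset("wikimedia/wikipedia", f"20231101.{lang}", split="train", streaming=True)
    article = next(iter(ds))
    # Records are expected to carry "title" and "text" fields.
    print(lang, article["title"], len(article["text"]), "chars")
```

In practice, low-resource editions yield far fewer and shorter articles, which is the inequality flagged under Considerations.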