Latam-GPT: The Free, Open Source, and Collaborative AI of Latin America
The supercomputing infrastructure at the University of Tarapacá (UTA) in Arica, Chile, is a fundamental pillar for Latam-GPT. With a projected investment of $10 million, the new center has a cluster of 12 nodes, each equipped with eight state-of-the-art NVIDIA H200 GPUs. This capacity, unprecedented in Chile and the wider region, not only enables large-scale model training in the country for the first time; it also encourages decentralization and energy efficiency.

The first version of Latam-GPT will launch this year. The model will be refined and expanded as new strategic partners join the effort and more robust data sets are integrated into it.

This interview has been edited for length and clarity.

WIRED: Tech giants such as Google, OpenAI, and Anthropic have invested billions in their models. What is the technical and strategic argument for developing a separate model specifically for Latin America?

Álvaro Soto: Regardless of how powerful these other models may be, the