
Multiverse launches compressed OpenAI language model designed to cut memory needs and lower AI infrastructure costs



Spanish AI company Multiverse Computing has released HyperNova 60B 2602, a compressed version of OpenAI’s gpt-oss-120B, and published it for free on Hugging Face.

The new version cuts the original model's memory footprint from 61GB to 32GB, a reduction of roughly half, and Multiverse says it retains near-parity tool-calling performance despite the smaller size.

In theory, this means a model that once required heavy infrastructure can run on far less hardware. For developers with tighter budgets or energy constraints, that’s a potentially huge advantage.

[Chart: Multiverse Computing HyperNova 60B 2602 performance. Image credit: Multiverse Computing]
