
Skywork AI Advances Multimodal Reasoning: Introducing Skywork R1V2 with Hybrid Reinforcement Learning



Recent advancements in multimodal AI have highlighted a persistent challenge: achieving strong specialized reasoning capabilities while preserving generalization across diverse tasks. “Slow-thinking” models such as OpenAI-o1 and Gemini-Thinking have made strides in deliberate analytical reasoning but often exhibit compromised performance on general visual understanding tasks, with increased tendencies toward visual hallucinations. As the field progresses toward building general-purpose AI systems, reconciling this tradeoff remains a critical research problem.

Skywork AI Introduces Skywork R1V2

Skywork AI has released Skywork R1V2, a next-generation multimodal reasoning model designed to address the reasoning-generalization tradeoff systematically. Building upon the foundation of Skywork R1V, R1V2 introduces a hybrid reinforcement learning framework, combining reward-model guidance with structured rule-based signals. The model bypasses the conventional reliance on teacher-student distillation by learning directly from multimodal interactions, offering an open and reproducible advancement through its release on Hugging Face.

Technical Approach and Innovations

Skywork R1V2 incorporates Group Relative Policy Optimization (GRPO) alongside a Selective Sample Buffer (SSB) to enhance training stability and efficiency. GRPO evaluates candidate responses relative to one another within the same query group, but when most candidates in a group earn similar rewards, the relative advantages shrink toward zero and the learning signal collapses. The SSB mechanism addresses this by maintaining a cache of informative samples, ensuring the policy retains continuous access to high-value gradients.
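The interplay between group-relative advantages and the buffer can be sketched as follows. This is a minimal illustration, not the paper's implementation: the advantage threshold, buffer capacity, and sampling policy are hypothetical choices made for the example.

```python
import random

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantage: score each response relative to its query group."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

class SelectiveSampleBuffer:
    """Cache groups whose advantages did not collapse toward zero."""
    def __init__(self, capacity=1024, threshold=0.05):  # illustrative values
        self.capacity = capacity
        self.threshold = threshold
        self.buffer = []

    def maybe_add(self, responses, advantages):
        # Keep only groups that still carry a usable learning signal.
        if max(abs(a) for a in advantages) > self.threshold:
            self.buffer.append((responses, advantages))
            self.buffer = self.buffer[-self.capacity:]  # bounded cache

    def sample(self, k):
        # Replay informative groups when a fresh batch is degenerate.
        return random.sample(self.buffer, min(k, len(self.buffer)))
```

A group where every response receives the same reward yields all-zero advantages and is skipped, while a group with reward spread is cached for replay, keeping non-degenerate gradients available throughout training.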

Additionally, the model adopts a Mixed Preference Optimization (MPO) strategy, integrating reward-model-based preferences with rule-based constraints. This hybrid optimization allows Skywork R1V2 to strengthen step-by-step reasoning quality while maintaining consistency on general perception tasks. A modular training approach, using lightweight adapters between a frozen InternViT-6B vision encoder and a pretrained language model, preserves the language model's reasoning capabilities while efficiently optimizing cross-modal alignment.
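One way to picture how a learned reward score and deterministic rules can be blended into a single preference signal is sketched below. The specific rules (answer formatting, length cap), the blend weight `alpha`, and the function names are illustrative assumptions, not Skywork's actual reward design.

```python
def rule_based_score(answer, must_contain="\\boxed", max_len=4096):
    """Hypothetical rule checks: reward well-formatted, bounded answers."""
    score = 0.0
    score += 0.5 if must_contain in answer else 0.0   # format constraint
    score += 0.5 if len(answer) <= max_len else 0.0   # length constraint
    return score

def hybrid_preference(reward_model_score, answer, alpha=0.7):
    """Blend a learned reward-model score with deterministic rule signals.

    alpha weights the reward model; (1 - alpha) weights the rules.
    Both components are assumed normalized to [0, 1].
    """
    return alpha * reward_model_score + (1 - alpha) * rule_based_score(answer)

def prefer(score_a, answer_a, score_b, answer_b):
    # Which candidate does the hybrid signal prefer? (pairwise preference data)
    better_a = hybrid_preference(score_a, answer_a) >= hybrid_preference(score_b, answer_b)
    return "a" if better_a else "b"
```

The rule term acts as a guardrail: even when two candidates receive identical reward-model scores, the one satisfying the structural constraints wins the pairwise comparison.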

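The adapter-based modular design described above amounts to training only a small bridge network while both endpoints stay frozen. The sketch below uses NumPy and illustrative dimensions (3200-d vision features projected into a 4096-d language-model embedding space); the hidden width and initialization are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

VISION_DIM, LM_DIM = 3200, 4096  # illustrative: vision features -> LM embeddings

class MLPAdapter:
    """Lightweight trainable bridge; the vision encoder and LM stay frozen."""
    def __init__(self, d_in, d_out, d_hidden=1024):
        self.w1 = rng.standard_normal((d_in, d_hidden)) * 0.02
        self.w2 = rng.standard_normal((d_hidden, d_out)) * 0.02

    def __call__(self, x):
        h = np.maximum(x @ self.w1, 0.0)   # ReLU hidden layer
        return h @ self.w2                 # project into the LM embedding space

# Stand-in for frozen encoder output: features for 5 image patches.
patch_features = rng.standard_normal((5, VISION_DIM))
adapter = MLPAdapter(VISION_DIM, LM_DIM)
visual_tokens = adapter(patch_features)    # shape (5, LM_DIM), fed to the LM
```

Because only the adapter's weights receive gradients, cross-modal alignment can be tuned cheaply without disturbing the language model's pretrained reasoning behavior.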
Empirical Results and Analysis

Skywork R1V2 demonstrates robust performance across a range of reasoning and multimodal benchmarks. On text reasoning tasks, the model achieves 78.9% on AIME2024, 63.6% on LiveCodeBench, 73.2% on LiveBench, 82.9% on IFEVAL, and 66.3% on BFCL. These results represent significant improvements over Skywork R1V1 and are competitive with substantially larger models such as DeepSeek R1 (671B parameters).

In multimodal evaluation, R1V2 achieves 73.6% on MMMU, 74.0% on MathVista, 62.6% on OlympiadBench, 49.0% on MathVision, and 52.0% on MMMU-Pro. The model consistently outperforms open-source baselines of comparable or larger size, including Qwen2.5-VL-72B and QvQ-Preview-72B, particularly excelling in tasks that require structured problem-solving across visual and textual inputs.

Compared against proprietary models, R1V2 narrows the performance gap, surpassing Claude 3.5 Sonnet and Gemini 2 Flash on critical multimodal benchmarks such as MMMU and MathVista. Importantly, calibrated reinforcement strategies substantially reduce the hallucination rate, to 8.7%, preserving factual integrity alongside complex reasoning.

Qualitative assessments further illustrate R1V2’s systematic problem-solving approach, with the model demonstrating methodical decomposition and verification behaviors in complex scientific and mathematical tasks, reinforcing its alignment with reflective cognitive patterns.

Conclusion

Skywork R1V2 advances the state of multimodal reasoning through a carefully designed hybrid reinforcement learning framework. By addressing the vanishing advantages problem with the Selective Sample Buffer and balancing optimization signals through Mixed Preference Optimization, the model achieves notable improvements in both specialized reasoning tasks and general multimodal understanding.

With benchmark-leading performances such as 62.6% on OlympiadBench and 73.6% on MMMU, Skywork R1V2 establishes a strong open-source baseline. Its design principles and training methodology offer a pragmatic approach toward developing robust, efficient multimodal AI systems. Future directions for Skywork AI include enhancing general visual understanding capabilities while preserving the sophisticated reasoning foundations laid by R1V2.


Check out the Paper and Model on HuggingFace.


Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.


