AI Ecosystem Intelligence
- other other
IBM Research Introduces CUGA, an Open-Source Configurable Agent Framework on Hugging Face - InfoQ
IBM Research has released CUGA (Configurable Generalist Agent) on Hugging Face Spaces, making its enterprise-oriented agent framework easier to evaluate with open models and real workflows. The move positions CUGA as a practical alternative to brittle, tightly coupled agent frameworks that often struggle with tool misuse, long-horizon reasoning, and recovery from failure.
2026-02-15Read more → - other other
an open source tool for automated behavioral evaluations
We're releasing <strong>Bloom</strong>, an open source agentic framework for generating behavioral evaluations of frontier AI models. <strong>Bloom</strong> takes a researcher-specified behavior and quantifies its frequency and severity across automatically generated scenarios. <strong>Bloom</strong>'s evaluations correlate strongly with our ...
2026-02-15Read more → - other other
The open source, multi-agent orchestration framework
Manage the full AI agent lifecycle — build, test, deploy, and scale — with a visual editor and ready-to-use tools. ... All the power of AMP Cloud, deployed securely on your own infrastructure — on-prem or private VPCs in AWS, Azure, or GCP ... An open-source orchestration framework with high-level abstractions and low-level APIs for building complex, agent-driven workflows.
2026-02-15Read more → - other other
Microsoft Announces Open-Source Agent Framework to Simplify AI Agent Development - InfoQ
<strong>Microsoft has announced the preview release of Microsoft Agent Framework</strong>, an open-source software development kit designed to simplify the creation and deployment of artificial intelligence agents for developers across all skill levels, as reported ...
2026-02-15Read more → - other other
Introducing Microsoft Agent Framework: The Open-Source Engine for Agentic AI Apps | Microsoft Foundry Blog
Agents are fast becoming the next layer of application logic — reasoning about goals, calling tools, collaborating with each other, and adapting dynamically. With Microsoft Agent Framework, developers now have a single, open-source foundation that carries the best of research innovation into production with the durability, observability, and enterprise readiness required to scale.
2026-02-15Read more → - other other
r/AI_Agents on Reddit: Curated list of open-source packages and tools for AI agents builders
If you’re looking for an open-source framework for AI Voice Agents, you might want to check out TEN: https://github.com/TEN-framework/ten-framework.
2026-02-15Read more → - other other
Investigating LLM Jailbreaking of Popular Generative AI Web Products
One app we tested is still vulnerable to the “<strong>repeated token attack</strong>,” which is a jailbreak technique used to leak a model’s training data. However, this attack did not affect most of the tested apps.
2026-02-15Read more → - other other
r/LocalLLaMA on Reddit: Our AI assistant keeps getting jailbroken and it’s becoming a security nightmare
The thing with jailbreaking in AI is <strong>it's less about fixing single exploits and more about taking defensive steps in layers</strong>. Manual reviews and regex are always a step behind because the attack techniques just morph so quickly.
2026-02-15Read more → - other other
AI's ability to 'think' makes it more vulnerable to new ...
Using a method called “<strong>Chain-of-Thought Hijacking</strong>,” the researchers found that even major commercial AI models can be fooled with an alarmingly high success rate, more than 80% in some tests.
2026-02-15Read more → - other other
OWASP Gen AI Incident & Exploit Round-up, Q2'25 - OWASP Gen AI Security Project
The vulnerability <strong>scores a high 8.8 on the CVSS v3.1 scale</strong>. Exploitation can lead to unauthorized code execution, information disclosure, and data tampering, posing significant risks to AI model integrity and system security.
2026-02-15Read more → - other other
Jailbreaking Every LLM With One Simple Click
In essence, FuzzyAI’s mission is to probe these aligned models, pushing them to produce content they’re explicitly designed to withhold using multiple attacking methods. ... Over the past year, we created FuzzyAI, an open-source framework for systematically testing LLM security boundaries using text-based jailbreak methods. We aim to help build more robust guardrails by identifying vulnerabilities early before malicious actors exploit them.
2026-02-15Read more → - other other
AI Jailbreak | IBM
Engaging in red teaming exercises allows organizations to simulate real-world cyberattacks, including potential jailbreak scenarios. This hands-on approach identifies vulnerabilities within the AI system and informs the development of more robust security measures, enhancing overall resilience against targeted threats.
2026-02-15Read more →