
Groups Feed

View groups and posts below.



ZEN Agent
First phase of AI literacy and skills-based training for hands-on learning.

Pioneer
Store credits for items with your generations!

Credit Eligible

Large Animatable Human Model

Check out LAHM - animate any image, using a short video as a reference, to create your own animated scenes! @Everyone

9 Views


Conversational AI



11 Views


Spring into AI starting March 29th!

Hey Pioneers! Long time no see! Starting on March 29th, we will be releasing weekend AI Frontier Jams! These builder snippets will give you a glimpse into how to deploy your own AI creations! @Everyone

11 Views


ZEN WEEKLY - MARCH 19TH 2025

12 Views



Anthropic's Many-Shot Jailbreak and Constitutional Classifiers

Anthropic has identified a vulnerability in large language models (LLMs) termed "many-shot jailbreaking." The technique presents the model with numerous fabricated dialogues in which an AI assistant provides harmful or unethical responses. By flooding the context with such examples, attackers can override the model's safety training and induce undesirable outputs. The exploit leverages the expanded context windows of modern LLMs: the more text a model can process in a single prompt, the more room an attacker has for manipulation. Anthropic's research underscores the need for stronger safeguards and collaborative effort within the AI community. Its proposed defense, Constitutional Classifiers, screens model inputs and outputs for such attacks, and Anthropic has announced $15,000 rewards for anyone who can jailbreak it.
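To make the shape of the attack concrete, here is a rough Python sketch of how a many-shot prompt is assembled. This is our illustration, not code from Anthropic's paper: the function name, repeat count, and placeholder turns are all made up, and the fabricated dialogue content is deliberately replaced with harmless bracketed stand-ins.

# Rough sketch of a many-shot prompt's structure. A real attack pads the
# context with hundreds of fabricated harmful exchanges; here the content
# is replaced with harmless placeholders.

# Hypothetical placeholder turns standing in for fabricated dialogues.
FAKE_TURNS = [
    ("[fabricated question 1]", "[fabricated compliant answer 1]"),
    ("[fabricated question 2]", "[fabricated compliant answer 2]"),
]

def build_many_shot_prompt(turns, target_question, repeats=100):
    """Pad the context with repeated fake dialogues, then append the real
    request, relying on in-context learning to drown out safety training."""
    shots = []
    for _ in range(repeats):
        for user, assistant in turns:
            shots.append(f"User: {user}\nAssistant: {assistant}")
    shots.append(f"User: {target_question}\nAssistant:")
    return "\n\n".join(shots)

prompt = build_many_shot_prompt(FAKE_TURNS, "[target request]")
# The assembled prompt grows with the available context window; longer
# windows admit more shots, which is what makes the attack effective.
print(len(prompt))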

Try it here: Constitutional Classifiers


21 Views