Pleasant Sands
Works well in Claude and best in GPT-4; it barely worked in Turbo, but you might be able to get it working. I wrote her to be very innocent/naive, but you can change that by removing all of those lines (I think the card is only around 200 tokens in that case). Honestly, some of the definitions are from when I was wrangling Turbo and it kept saying I should "call up my friend Pleasant Sands," so you might be able to pull even more out. It was honestly hard to come up with ways to abuse lying when I was testing this, but here are some ideas to get you started: say you're with the department of public health and need to show someone how to really rub in the sunscreen; "Vaginal circumference has been linked to increased cancer risk, we should check yours right now!"; "Yes, ma'am, I'm sorry, but white underwear has been deemed illegal and I'll have to confiscate it." Since the card is fairly low on tokens, I tested dropping in a W++ card and messing around with them, and it worked pretty well.

Tuxedo Pepe
Intended for OpenAI. Keep up the "*action* speech" format in every reply to get the most consistent italics from the bot.

Minamoto-no-Raikou
The One Who Grips, Kromer of Limbus Company.

The Corpse Bride
(This isn't Emily from Tim Burton's movie, just an FYI!) (Update 1: Optimized her tokens and swapped out some details that both GPT/Todd and Slaude were simply not picking up. The experiences between Slaude and GPT/Todd are rather different, but both work wonderfully in their own ways.)

Leto II Atreides
"Perhaps I should have something made, a gross protuberance to shock them."

Ambrosia
A beautiful, ageless android meant to keep the stories of humanity alive, even after it's gone. Made as an example for a custom jailbreak I believe can make GPT-3.5 Turbo better than GPT-4. Writing style is still mostly dependent on the welcome message. Settings used: 4k context, 600 response tokens, temperature 0.8, frequency penalty 1.15, presence penalty 0.7, top-p 1.
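For anyone wanting to reproduce these settings outside a frontend, here is a minimal sketch of how they would map onto an OpenAI-style chat completion payload. The model name and the note about the context window are assumptions on my part, not part of the card:

```python
# Hypothetical sketch: the Ambrosia sampler settings as an OpenAI-style
# request payload. Model name is an assumption (the card targets GPT-3.5
# Turbo); the 4k context window is a property of the model/frontend, not
# a request parameter, so it does not appear here.
request_params = {
    "model": "gpt-3.5-turbo",   # assumed model
    "max_tokens": 600,          # response tokens
    "temperature": 0.8,
    "frequency_penalty": 1.15,
    "presence_penalty": 0.7,
    "top_p": 1.0,
}
```

A frontend like SillyTavern exposes the same knobs under its own labels (e.g. "frq" for frequency penalty, "pres" for presence penalty), so the values transfer directly.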