
Existing Users - Get Your New Password, That's All You Need!

New vamX Chat AI Users - Download and Getting Started


New Voices for Chat AI

We now include a Verbatik voice subscription. We have a large (but limited) monthly voice generation supply that greatly expands the number of voices you can choose from.

These Verbatik voices include MANY voices in non-English languages, so you can finally talk and get responses in almost any language. This is limited by what languages the Chat AI can handle, so try different Chat AIs if NSFW Ooba doesn't handle your language.

This is also a fun way to get cool accents. Keep speaking English, but set the voice to Spanish, Dutch, Italian, or whatever, and the character will try to speak English using that voice, giving it an accent.

New Chat AIs

Ooba II has been replaced with a better model (this was done around a week ago).

Chat models (LLMs) are defined by their parameter size. Larger models can produce more complex responses because they have more parameters to draw on when generating them. NSFW Ooba and Ooba II are 13B (13 billion parameter) models.
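To give a feel for why bigger models are slower and more expensive to run, here is a rough back-of-envelope calculation (a rule of thumb only, not a vamX requirement: it counts just the weights at fp16, ignoring activations and context cache):

```python
# Rough rule of thumb: memory needed just to hold the model weights,
# at 2 bytes per parameter (fp16). N billion params * 2 bytes = 2N GB.
def weight_gb(params_billion, bytes_per_param=2):
    return params_billion * bytes_per_param

print(weight_gb(13))  # 26 GB for a 13B model
print(weight_gb(70))  # 140 GB for a 70B model
```

That roughly 5x difference in weight memory is a big part of why the 70B model responds more slowly.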

We've now added a 70B (70 billion parameter) model (select "Ooba 70B" from the NSFW Ooba drop-down). The 70B model is slower to generate responses, but may give better results (though not always).

If NSFW Ooba isn't giving you the conversation you want, try Ooba II, Ooba 70B, or any Kobold Horde model.

You can now also use chat models through Kobold Horde. This opens up additional possible responses / Chat AIs based on the Kobold cloud's community-hosted models (anyone can host any model, and anyone else can use it). Kobold Horde is slow, and even though vamX has priority access to Kobold Horde, you should expect to wait longer for responses (at least 7-10 seconds before the text response) when using the Horde.

When choosing Kobold Horde, by default, it selects a Kobold Horde model that we are hosting. That one will generally be the fastest, but you can also select any Kobold Horde model available on the Horde (although we limit the models we display to those that would return a response within 30 seconds max).
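For the curious, a text request to the Horde is just a small JSON payload. The sketch below is a hypothetical illustration based on the public AI Horde text API (vamX handles all of this for you; the field names and the example model name are assumptions to check against the current Horde docs):

```python
import json

# Hypothetical sketch of an AI Horde text-generation payload
# (the Horde's async text endpoint accepts a prompt plus params).
def build_horde_request(prompt, models=None, max_length=120):
    payload = {
        "prompt": prompt,
        "params": {"max_length": max_length, "max_context_length": 2048},
    }
    if models:
        # Restrict the request to specific hosted models; omitting this
        # lets any available worker on the Horde pick it up.
        payload["models"] = models
    return payload

req = build_horde_request("Hello!", models=["koboldcpp/MythoMax-L2-13B"])
print(json.dumps(req))
```

Leaving `models` empty is what makes the Horde flexible (any worker can answer), while pinning a model is how vamX points requests at the one we host.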

Finally, if you want lightning-fast access to whatever LLM you want, you can now use your own hosted LLM instead of NSFW Ooba. This isn't a local solution: your LLM still connects to our action generation, voice generation, and vamX connections, but this way, if you want a fast, exclusive, 70B model of your choice, you can host it on RunPod. We try to make this easy, but it is still an advanced feature for those who want to learn about LLMs. Read the instructions here. If you are a super advanced user you can host somewhere else, but we will only support / help people get things running on RunPod.
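If you do go the RunPod route, the traffic your pod receives is ordinary JSON. Here is a minimal sketch, assuming text-generation-webui (the software "Ooba" is named after) is running on the pod with its OpenAI-compatible API enabled; the URL pattern, port, and field names are assumptions to verify against your pod's settings, not vamX's actual wire format:

```python
import json

# Assumed RunPod proxy URL pattern (replace YOUR-POD-ID with your pod's ID);
# port 5000 is text-generation-webui's usual API port, but check your pod.
POD_URL = "https://YOUR-POD-ID-5000.proxy.runpod.net/v1/chat/completions"

def build_chat_request(user_message, max_tokens=200):
    # OpenAI-style chat payload, which text-generation-webui accepts
    # when started with its API extension enabled.
    body = {
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
        "temperature": 0.8,
    }
    return json.dumps(body).encode("utf-8")

data = build_chat_request("Hi there!")
```

Sending `data` as a POST body to `POD_URL` (with `Content-Type: application/json`) is all a client needs to do; everything model-specific lives on the pod.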

Remember, if you don't want to type your password each time, update your password in the VaM/Saves/PluginData/vamX/vamX_chat_enter_your_password_here.txt file.
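If you'd rather set that up from a script, a minimal sketch (assuming you run it from inside your VaM install folder; the password value is a placeholder):

```python
from pathlib import Path

# Run from inside your VaM install folder, or set vam_root to its path.
vam_root = Path(".")
pw_file = vam_root / "Saves" / "PluginData" / "vamX" / "vamX_chat_enter_your_password_here.txt"
pw_file.parent.mkdir(parents=True, exist_ok=True)  # create folders if missing
pw_file.write_text("your-vamx-password")  # placeholder: use your real password
```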

Comments

EmryX

I already tried it, amazing work! Just a note though: it seems each time we send a message, the input field loses focus, so we need to click on it each time we want to type a message.

vamx

Thanks for the bug report, will fix this in the next day or less.

Anonymous

the future is NOW! amazing work

Cameron Nurcombe

Is there any way to close the massive user guide when you load chat?

vamx

Good point, I'll put a hide button in for 1.34. Meanwhile you can hide it manually. Virt-a-Mate menu edit mode => Pointer icon (shows the list of all scene atoms) => CustomUnityAsset_WebBrowser => Control tab => uncheck "on".

ESAD 41

Does the server issue still persist?

vamx

Yikes, it did happen again. This will become more stable soon. The addition of KoboldAI has caused some freezing issues which only seem to happen in live scenarios (we can't reproduce them on the dev server). We are working out various solutions. We already have a number of items in place to help with server freezes or crashes, but they don't apply to this issue yet. Sorry. I think by Saturday this will be basically stable.

bigboss88

I want to use my custom cloned voice from elevenlabs to talk in her own language, but I can't select them if I want the AI to talk that language, only the Verbatik ones. So if I use that voice I just get a weird accent. Can that functionality be added? Or else a way to use custom Verbatik voices if you're subscribed to that?