Vicuna AI

Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality

Vicuna AI Details

Product Information

Social Media

Product Description

We introduce Vicuna-13B, an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Preliminary evaluation ...

Vicuna AI Introduction

Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality

Overview

The rapid advancement of large language models (LLMs) has revolutionized chatbot systems, resulting in unprecedented levels of intelligence as seen in OpenAI's ChatGPT. However, despite its impressive performance, the training and architecture details of ChatGPT remain unclear, hindering research and open-source innovation in this field. Inspired by the Meta LLaMA and Stanford Alpaca projects, we introduce Vicuna-13B, an open-source chatbot backed by an enhanced dataset and an easy-to-use, scalable infrastructure. By fine-tuning a LLaMA base model on user-shared conversations collected from ShareGPT.com, Vicuna-13B has demonstrated competitive performance compared to other open-source models like Stanford Alpaca. This blog post provides a preliminary evaluation of Vicuna-13B's performance and describes its training and serving infrastructure. We also invite the community to interact with our online demo to test the capabilities of this chatbot.

How Good is Vicuna?

After fine-tuning Vicuna with 70K user-shared ChatGPT conversations, we discover that Vicuna becomes capable of generating more detailed and well-structured answers compared to Alpaca (see examples below), with the quality on par with ChatGPT.

Online Demo

Try the Vicuna-13B demo here!

Training

Vicuna is created by fine-tuning a LLaMA base model using approximately 70K user-shared conversations gathered from ShareGPT.com with public APIs. To ensure data quality, we convert the HTML back to markdown and filter out some inappropriate or low-quality samples. Additionally, we divide lengthy conversations into smaller segments that fit the model's maximum context length.
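The segmentation step described above can be sketched as follows. This is an illustrative stand-in, not the actual preprocessing code: token counting is simulated with a whitespace split, whereas a real pipeline would use the LLaMA tokenizer.

```python
MAX_LEN = 2048  # Vicuna's maximum context length

def count_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer's token count."""
    return len(text.split())

def split_conversation(turns, max_len=MAX_LEN):
    """Split a list of (role, text) turns into segments whose
    cumulative token count stays within max_len."""
    segments, current, used = [], [], 0
    for role, text in turns:
        n = count_tokens(text)
        # Start a new segment when the next turn would overflow.
        if current and used + n > max_len:
            segments.append(current)
            current, used = [], 0
        current.append((role, text))
        used += n
    if current:
        segments.append(current)
    return segments
```

A conversation longer than the context window thus becomes several independent training samples rather than being truncated outright.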
  • Multi-turn conversations: We adjust the training loss to account for multi-turn conversations and compute the fine-tuning loss solely on the chatbot's output.
  • Memory optimizations: To enable Vicuna's understanding of long context, we expand the max context length from 512 in Alpaca to 2048, which substantially increases GPU memory requirements. We tackle the memory pressure by utilizing gradient checkpointing and flash attention.
  • Cost reduction via spot instances: The 40x larger dataset and 4x longer sequences pose a considerable challenge for training expenses. We employ SkyPilot managed spot to reduce the cost by leveraging cheaper spot instances, with auto-recovery from preemptions and automatic zone switching. This solution cuts the cost of training the 7B model from $500 to around $140 and the 13B model from around $1K to $300.
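The multi-turn loss adjustment in the first bullet can be illustrated with a minimal sketch. The `-100` ignore index follows the convention used by PyTorch's cross-entropy loss; the role-tagged token layout here is an assumption for illustration, not the exact data format used in training.

```python
IGNORE_INDEX = -100  # tokens with this label are excluded from the loss

def mask_labels(token_ids, roles):
    """Given per-token ids and the role that produced each token,
    return labels in which only the chatbot's output tokens
    contribute to the fine-tuning loss."""
    return [
        tok if role == "assistant" else IGNORE_INDEX
        for tok, role in zip(token_ids, roles)
    ]
```

In effect the user's turns still provide context to the model, but gradients flow only through the assistant's responses.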

Serving

We build a serving system that is capable of serving multiple models with distributed workers. It supports flexible plug-in of GPU workers from both on-premise clusters and the cloud. By utilizing a fault-tolerant controller and managed spot feature in SkyPilot, this serving system can work well with cheaper spot instances from multiple clouds to reduce the serving costs. It is currently a lightweight implementation and we are working on integrating more of our latest research into it.
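As a rough illustration of the controller/worker design described above (not the actual FastChat implementation), a fault-tolerant controller might track which workers serve which models via registration and heartbeats, routing requests only to workers seen recently. The class names, heartbeat TTL, and registry layout are all assumptions.

```python
import time

HEARTBEAT_TTL = 30.0  # seconds before a silent worker is considered dead

class Controller:
    def __init__(self):
        # address -> {"models": [...], "last_seen": timestamp}
        self.workers = {}

    def register(self, address, models, now=None):
        """A GPU worker (on-premise or cloud) announces the models it serves."""
        self.workers[address] = {"models": list(models),
                                 "last_seen": now if now is not None else time.time()}

    def heartbeat(self, address, now=None):
        """Workers ping periodically; stale entries are treated as failed."""
        if address in self.workers:
            self.workers[address]["last_seen"] = now if now is not None else time.time()

    def get_worker(self, model, now=None):
        """Return the address of a live worker serving `model`, or None."""
        now = now if now is not None else time.time()
        for addr, info in self.workers.items():
            alive = now - info["last_seen"] < HEARTBEAT_TTL
            if alive and model in info["models"]:
                return addr
        return None
```

When a spot instance is preempted, its heartbeats stop, the controller stops routing to it, and a replacement worker simply re-registers — which is what makes cheap preemptible instances viable for serving.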

How To Evaluate a Chatbot?

Evaluating AI chatbots is a challenging task, as it requires examining language understanding, reasoning, and context awareness. With AI chatbots becoming more advanced, current open benchmarks may no longer suffice. For instance, the evaluation dataset used in Stanford’s Alpaca, self-instruct, can be effectively answered by SOTA chatbots, making it difficult for humans to discern differences in performance. More limitations include training/test data contamination and the potentially high cost of creating new benchmarks. To tackle these issues, we propose an evaluation framework based on GPT-4 to automate chatbot performance assessment.
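A GPT-4-based judging pipeline of this kind can be sketched as follows. The prompt wording, the two-score first-line output format, and both helper functions are illustrative assumptions rather than the authors' exact setup; the judge model's reply would come from an API call not shown here.

```python
JUDGE_TEMPLATE = (
    "We would like your feedback on two AI assistants responding to the "
    "question below.\n\n"
    "Question: {question}\n\n"
    "Assistant 1: {answer_1}\n\n"
    "Assistant 2: {answer_2}\n\n"
    "Rate the helpfulness, relevance, accuracy, and level of detail of each "
    "response. Output two scores from 1 to 10 on the first line, separated "
    "by a space, then explain your reasoning."
)

def build_judge_prompt(question, answer_1, answer_2):
    """Assemble the prompt shown to the judge model (e.g. GPT-4)."""
    return JUDGE_TEMPLATE.format(question=question,
                                 answer_1=answer_1,
                                 answer_2=answer_2)

def parse_scores(judge_reply):
    """Parse 'score1 score2' from the first line of the judge's reply."""
    first_line = judge_reply.strip().splitlines()[0]
    s1, s2 = first_line.split()[:2]
    return float(s1), float(s2)
```

Averaging such pairwise scores over a diverse question set is what yields relative-quality figures like the "90% of ChatGPT" estimate.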

Limitations

We have noticed that, similar to other large language models, Vicuna has certain limitations. For instance, it is not good at tasks involving reasoning or mathematics, and it may have limitations in accurately identifying itself or ensuring the factual accuracy of its outputs. Additionally, it has not been sufficiently optimized to guarantee safety or mitigate potential toxicity or bias. To address the safety concerns, we use the OpenAI moderation API to filter out inappropriate user inputs in our online demo. Nonetheless, we anticipate that Vicuna can serve as an open starting point for future research to tackle these limitations.

Release

In our first release, we share the training, serving, and evaluation code on a GitHub repo: https://github.com/lm-sys/FastChat, along with the Vicuna-13B model weights. There is no plan to release the dataset. Join our Discord server and follow our Twitter to get the latest updates.


Vicuna AI FAQ

How does Vicuna AI compare to ChatGPT and Google Bard?

Preliminary evaluations using GPT-4 as a judge indicate that Vicuna AI achieves more than 90% of the quality of ChatGPT and Google Bard. This means that Vicuna AI can provide responses that are roughly as helpful, relevant, accurate, and detailed as those of ChatGPT and Bard in most cases.

How was Vicuna AI trained?

Vicuna AI was trained by fine-tuning a LLaMA base model on a dataset of 70,000 user-shared conversations collected from ShareGPT. These conversations were converted to markdown and filtered for quality before training.

What are the limitations of Vicuna AI?

Vicuna AI, like other large language models, has certain limitations. These include:
  • Difficulty with tasks involving reasoning and mathematics
  • Potential factual inaccuracies in its outputs
  • Limited safety guarantees and possible toxicity or bias

The developers are working to address these limitations through ongoing research.

Vicuna AI Website Traffic

Visits

| Date       | Visits    |
|------------|-----------|
| 2024-06-01 | 2,207,473 |
| 2024-07-01 | 2,143,625 |
| 2024-08-01 | 2,099,531 |

Metric

| Metric                 | Value    |
|------------------------|----------|
| Bounce Rate            | 59.33%   |
| Pages Per Visit        | 1.99     |
| Average Visit Duration | 177.02 s |

Geography

| Country                          | Share  |
|----------------------------------|--------|
| 🇨🇳 People's Republic of China    | 14.42% |
| 🇺🇸 United States of America      | 14.22% |
| 🇷🇺 Russian Federation            | 12.08% |
| 🇻🇳 Vietnam                       | 5.55%  |
| 🇩🇪 Germany                       | 5.30%  |

Source

| Source         | Value  |
|----------------|--------|
| Direct Access  | 55.07% |
| Search         | 33.37% |
| Referrals      | 8.26%  |
| Social Media   | 3.06%  |
| Paid Referrals | 0.16%  |
| Email          | 0.07%  |

Vicuna AI Alternative Products

绘AI
https://www.ai016.com

Image Generation

Create stunning images with AI technology. Our platform allows you to generate unique imagery from text prompts, making it ideal for designers, artists, and content creators. Try it now!

1.9K visits · 42% search
Leonardo AI
https://leonardo.ai

Image Generation

Transform your projects with our AI image generator. Generate high-quality, AI-generated images with unparalleled speed and style to elevate your creative vision.

14.2M visits · 33% search
AI Art
https://aiart.chuangkit.com

Image Generation

Chuangkit is an online collaborative design platform offering a graphic design tool and online design software, with a large library of poster templates, social-media images, e-commerce templates, product images, invitations, announcements, congratulatory posters, logos, and other free design assets and templates. Its AI toolbox provides online poster generation, one-click background removal, object erasing, watermark removal, image restoration, lossless upscaling, smart collage, and many other AI tools.

90.9K visits · 7% search
AIDesign
https://aidesign.click

Text-to-Image Conversion

Generate AI images from text descriptions, and more text-to-image conversion.

MagicShot.ai
https://magicshot.ai

Image Generation

Transform your ideas into stunning AI art with MagicShot.ai. Create images instantly using our AI photo generator. Unleash your creativity!

33.9K visits · 38% search
Stockimg AI
https://stockimg.ai/

Image Generation

Stockimg is an all-in-one design and content creation tool powered by AI. You can easily generate logos, illustrations, wallpapers, posters, and more.

265.5K visits · 48% search