How to Create a Chat-bot Using Django DRF [2024]
Here, we are going to create a chatbot using an OpenAI model with the Django REST framework (DRF).
To implement this, you may need basic knowledge of Python and how the Django framework works.
To read more about setting up pagination in DRF, refer to our blog How to Setup Pagination in DRF.
Requirements:
- openai
- djangorestframework
- pydub
- python3-openid==3.2.0
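These can be installed with pip; the exact package set below is inferred from the imports used later in this post. Note that pydub needs ffmpeg available on the system to export MP3 files.

pip install django djangorestframework openai pydub python3-openid==3.2.0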
Step 1- Project setup
Create a Django project called Chat_Bot and install the necessary packages listed in the requirements.
django-admin startproject Chat_Bot
Now, create an app called chat.
python manage.py startapp chat
Now add the app (along with rest_framework) to the project settings, and include the app's URLs in the project's urls.py.
INSTALLED_APPS = [
    ...
    'rest_framework',
    'chat',
]
from django.urls import path, include

urlpatterns = [
    path('chat/api/', include("chat.urls")),
]
The basic setup for the project is now complete, so we can implement the workflow for the chatbot.
Step 2- Create The workflow for the chatbot in Django
To create a chatbot, we need to set up a basic workflow for the chat app:
1. Models
To keep the conversation going with context, we need to store each user message and its response.
from django.db import models
from django.contrib.auth.models import AbstractUser

class User(AbstractUser):
    first_name = models.CharField(max_length=250)
    last_name = models.CharField(max_length=250)
    email = models.CharField(max_length=250, unique=True)

class ChatHistory(models.Model):
    user_id = models.ForeignKey(User, on_delete=models.CASCADE)
    user_chat = models.TextField(null=True, blank=True)
    response_chat = models.TextField(null=True, blank=True)
    # Uploaded audio prompt (the upload path name is arbitrary); used later as chat_history.audio.path
    audio = models.FileField(upload_to='chat_audio/', null=True, blank=True)
    created_at = models.DateTimeField(auto_now_add=True)
The ChatHistory model stores the user, so we can identify which user is chatting with the bot, along with the user's message, the GPT model's response, and the uploaded audio file (if any). In addition, we add a field to keep track of when each chat was created.
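Since the User model extends AbstractUser, Django has to be told to use it as the authentication user model, and because audio files are uploaded we also need media settings. A minimal sketch for settings.py (the media directory name is an assumption):

# settings.py
AUTH_USER_MODEL = 'chat.User'    # use the custom User model defined above

MEDIA_URL = '/media/'            # URL prefix for uploaded audio files
MEDIA_ROOT = BASE_DIR / 'media'  # directory where uploaded audio files are stored

After adding the models, create and apply the migrations:

python manage.py makemigrations
python manage.py migrate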
2. Serializer
In Django DRF, a view can't return model objects directly. The objects must be converted to JSON before they can be used in a response.
from rest_framework import serializers
from .models import *

class ChatHistorySerializer(serializers.ModelSerializer):
    class Meta:
        model = ChatHistory
        fields = '__all__'
Here, I have connected the ChatHistory model to a serializer so that it can be used to convert the ChatHistory objects to JSON based on our needs.
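As a quick check, you can exercise the serializer in the Django shell; a minimal sketch, where the user and chat values are just example data:

# python manage.py shell
from chat.models import User, ChatHistory
from chat.serializers import ChatHistorySerializer

user = User.objects.create_user(username="demo", email="demo@example.com")
chat = ChatHistory.objects.create(
    user_id=user,
    user_chat="Hello!",
    response_chat="Hi, how can I help?")

print(ChatHistorySerializer(instance=chat).data)
# {'id': 1, 'user_chat': 'Hello!', 'response_chat': 'Hi, how can I help?', ...}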
3. URLs
The URL pattern connects the API endpoint to the chatbot view.
from django.urls import path
from . import views

urlpatterns = [
    path('chat-bot/', views.ChatBotView.as_view(), name='chat-bot'),
    # GET and DELETE read the user id from the URL
    path('chat-bot/<int:user_id>/', views.ChatBotView.as_view(), name='chat-bot-user'),
]
4. Views
The view contains most of the chatbot's functionality. Let me break it down into its pieces so each one is easy to understand.
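Before going piece by piece, here is the overall shape of the view we are going to build (a skeleton only; the method bodies are filled in below):

class ChatBotView(APIView):
    def post(self, request, user_id=None): ...   # save a message and generate a GPT reply
    def get(self, request, user_id): ...         # list a user's chat history
    def delete(self, request, user_id): ...      # clear a user's chat history

    # helpers
    def audio_conversion(self, request): ...                    # convert uploaded audio to MP3 and save it
    def chat_text(self, chat_histories, starting_prompt): ...   # build the messages list
    def translated_data(self, client, messages): ...            # call the chat completion API
    def serializer_valid_check(self, serializer, data): ...     # validate and save a text message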
- Audio Conversion
This chatbot can also generate responses from audio: instead of typing, the user can ask questions through the microphone.
When the end user is on an iOS device, the recorded audio needs to be converted to MP3 (or another suitable format) before it can be processed by the speech-to-text model.
def audio_conversion(self, request):
    audio_file = request.data.get('audio', None)
    chat_type = request.data.get('chat_type', None)
    if chat_type == 'audio' and audio_file is not None:
        try:
            # Re-encode the uploaded audio as MP3 using pydub
            audio_content = audio_file.read()
            audio_segment = AudioSegment.from_file(
                io.BytesIO(audio_content),
                frame_rate=44100,
                channels=2,
                sample_width=2
            )
            modified_audio_content = audio_segment.export(
                format="mp3").read()
            file_name = audio_file.name
            modified_audio_file = ContentFile(
                modified_audio_content, name=f'{file_name}.mp3')
            data = request.data.copy()
            data['audio'] = modified_audio_file
            serializer = ChatHistorySerializer(data=data)
            if serializer.is_valid():
                chat_history = serializer.save()
                return chat_history
            return Response(serializer.errors, status=400)
        except Exception as E:
            return Response(str(E), status=400)
    else:
        return Response({"error": "No audio file found"}, status=400)
This function converts the uploaded audio to MP3 format and saves it to the database.
- Audio-to-text Generation
Before generating the chatbot response, we need to convert the received audio to text. Only after we have the text can we ask the GPT model to generate an appropriate response.
if chat_type == 'audio':
    # Transcribe the saved audio file with Whisper
    with open(chat_history.audio.path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file
        )
    text = transcript.text
    chat_history.user_chat = text
    chat_history.save()
By changing transcriptions to translations, the response will contain the audio translated into English text:
transcript = client.audio.translations.create(
    model="whisper-1",
    file=audio_file
)
- Message History Setup
This helper arranges the user's previous messages and the bot's responses into the message format expected by the chat completion API.
def chat_text(self, chat_histories, starting_prompt):
    # Start with the system prompt, then replay the stored conversation
    messages = [{"role": "system", "content": starting_prompt}]
    if chat_histories:
        for chat_history in chat_histories:
            if chat_history.user_chat:
                messages.append(
                    {"role": "user",
                     "content": f"user_chat = {chat_history.user_chat}"})
            if chat_history.response_chat:
                # Previous bot replies are replayed with the assistant role
                messages.append(
                    {"role": "assistant",
                     "content": chat_history.response_chat})
    return messages
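For a user with one earlier exchange stored, the messages list built by this helper would look roughly like this (the stored texts are only example data):

messages = [
    {"role": "system", "content": "YOUR SYSTEM PROMPT"},
    {"role": "user", "content": "user_chat = What are your opening hours?"},
    {"role": "assistant", "content": "We are open 9am to 6pm, Monday to Friday."},
]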
- Generating a Response from ChatGPT
def translated_data(self, client, messages):
    try:
        # Ask the model for a JSON reply and extract the "data" key
        response = client.chat.completions.create(
            model="gpt-4-1106-preview",
            response_format={"type": "json_object"},
            messages=messages
        )
        message = response.choices[0].message.content
        data = json.loads(message)
        translation = data.get("data", None)
        return translation
    except Exception as E:
        return Response(str(E), status=400)
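Because the request uses response_format={"type": "json_object"}, the system prompt must explicitly ask the model to answer in JSON, and this view expects the answer under a "data" key. A possible starting prompt, purely as an illustration (the project description is made up):

starting_prompt = (
    "You are a helpful support chatbot for ACME Corp. "   # example project description
    "Always reply with a JSON object of the form "
    '{"data": "<your answer to the user>"} and nothing else.'
)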
Views.py
The views.py file brings the pieces above together and contains the full logic of the chatbot.
import json
import io

from rest_framework.views import APIView
from rest_framework.response import Response
from openai import OpenAI
from pydub import AudioSegment
from django.core.files.base import ContentFile

from chat.models import ChatHistory
from chat.serializers import ChatHistorySerializer

# The prompt must also instruct the model to answer in JSON with a "data" key (see above)
starting_prompt = "DESCRIBE YOUR PROJECT HERE TO GUIDE THE GPT MODEL"
class ChatBotView(APIView):
    def post(self, request, user_id=None):
        chat_history = self.audio_conversion(request)
        chat_type = request.data.get('chat_type', None)
        user = request.data.get('user_id', None)
        text = request.data.get('user_chat', None)
        # If the audio step failed (and this is not a text chat), return the error
        if chat_type != 'text' and isinstance(chat_history, Response):
            return chat_history
        client = OpenAI(api_key="YOUR_API_KEY")
        if chat_type == 'text':
            # Save the text message directly
            serializer = ChatHistorySerializer(data=request.data)
            chat_history = self.serializer_valid_check(
                serializer, data=request.data)
            if isinstance(chat_history, Response):
                return chat_history
        if chat_type == 'audio':
            # Transcribe the saved audio file with Whisper
            with open(chat_history.audio.path, "rb") as audio_file:
                transcript = client.audio.transcriptions.create(
                    model="whisper-1",
                    file=audio_file
                )
            text = transcript.text
            chat_history.user_chat = text
            chat_history.save()
        # Rebuild the conversation and ask the model for a reply
        chat_histories = ChatHistory.objects.filter(user_id__id=user)
        messages = self.chat_text(chat_histories, starting_prompt)
        translation = self.translated_data(client, messages)
        if isinstance(translation, Response):
            return translation
        chat_history.response_chat = translation
        chat_history.save()
        serializer = ChatHistorySerializer(instance=chat_history)
        return Response(serializer.data, status=200)
    def get(self, request, user_id):
        # Return the full chat history for a user
        chat_history = ChatHistory.objects.filter(user_id__id=user_id)
        serializer = ChatHistorySerializer(chat_history, many=True)
        return Response(serializer.data)

    def delete(self, request, user_id):
        # Clear the chat history for a user
        ChatHistory.objects.filter(user_id__id=user_id).delete()
        return Response({})
    def chat_text(self, chat_histories, starting_prompt):
        # Start with the system prompt, then replay the stored conversation
        messages = [{"role": "system", "content": starting_prompt}]
        if chat_histories:
            for chat_history in chat_histories:
                if chat_history.user_chat:
                    messages.append(
                        {"role": "user",
                         "content": f"user_chat = {chat_history.user_chat}"})
                if chat_history.response_chat:
                    # Previous bot replies are replayed with the assistant role
                    messages.append(
                        {"role": "assistant",
                         "content": chat_history.response_chat})
        return messages
    def serializer_valid_check(self, serializer, data):
        if serializer.is_valid():
            return serializer.save()
        else:
            return Response(serializer.errors, status=400)
    def translated_data(self, client, messages):
        try:
            # Ask the model for a JSON reply and extract the "data" key
            response = client.chat.completions.create(
                model="gpt-4-1106-preview",
                response_format={"type": "json_object"},
                messages=messages
            )
            message = response.choices[0].message.content
            data = json.loads(message)
            translation = data.get("data", None)
            return translation
        except Exception as E:
            return Response(str(E), status=400)
    def audio_conversion(self, request):
        audio_file = request.data.get('audio', None)
        chat_type = request.data.get('chat_type', None)
        if chat_type == 'audio' and audio_file is not None:
            try:
                # Re-encode the uploaded audio as MP3 using pydub
                audio_content = audio_file.read()
                audio_segment = AudioSegment.from_file(
                    io.BytesIO(audio_content),
                    frame_rate=44100,
                    channels=2,
                    sample_width=2
                )
                modified_audio_content = audio_segment.export(
                    format="mp3").read()
                file_name = audio_file.name
                modified_audio_file = ContentFile(
                    modified_audio_content, name=f'{file_name}.mp3')
                data = request.data.copy()
                data['audio'] = modified_audio_file
                serializer = ChatHistorySerializer(data=data)
                if serializer.is_valid():
                    chat_history = serializer.save()
                    return chat_history
                return Response(serializer.errors, status=400)
            except Exception as E:
                return Response(str(E), status=400)
        else:
            return Response({"error": "No audio file found"}, status=400)
API params:
- user_chat: SOME_STRING_CONTAINING_YOUR_QUESTION
- user_id: USER_ID
- chat_type: text
- audio: THE_AUDIO_FILE
The chat_type must be either text or audio, depending on the request; the audio field is only needed for audio requests.
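For example, a manual test of the endpoint with the requests library (assumed installed; the host, port, and user id are placeholders):

# python manage.py runserver   (start the development server first)
import requests

resp = requests.post(
    "http://127.0.0.1:8000/chat/api/chat-bot/",
    data={"user_id": 1, "chat_type": "text",
          "user_chat": "What are your opening hours?"},
)
print(resp.json())

# For an audio request, attach the file instead of user_chat:
with open("question.m4a", "rb") as f:
    resp = requests.post(
        "http://127.0.0.1:8000/chat/api/chat-bot/",
        data={"user_id": 1, "chat_type": "audio"},
        files={"audio": f},
    )
print(resp.json())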
To read more about using the UPS express API in Django DRF, refer to our blog How to Use the UPS Express API in Django DRF in 2024
Conclusion
In this blog, I have explained how to create a chatbot in Django. Each piece of functionality was explained separately so that you can understand the workflow and implement it without running into errors.
