Robust LLM output
Hi everyone,
I am working on a project using Llama 3 (8B) or Llama 3.1 with Ollama.
The issue I'm facing is ensuring the output is robust and follows the exact format specified.
For example, when I provide a prompt that should force the following output:
```json
{ "name": "Ali", "age": 20, "major": "CS" }
```
The actual output sometimes looks like this due to variations in the input:
```json
{ "name": "Ali", "city": "xxx" }
```
As shown, it does not return certain expected fields (when a value doesn't exist, it omits the field entirely instead of returning an empty string ''), and it adds fields that were not asked for in the prompt's output format.
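One way to make the output robust regardless of what the model emits is to post-process it: parse the JSON, drop any keys that were not requested, and fill in missing keys with a default such as ''. A minimal sketch (the `EXPECTED_FIELDS` mapping and `normalize` helper are hypothetical names for illustration):

```python
import json

# Expected keys with fallback values used when the model omits them.
EXPECTED_FIELDS = {"name": "", "age": "", "major": ""}

def normalize(raw: str) -> dict:
    """Parse the model's JSON and coerce it to the expected schema:
    discard keys that were not requested, fill missing keys with ''."""
    data = json.loads(raw)
    return {key: data.get(key, fallback) for key, fallback in EXPECTED_FIELDS.items()}

# A response with a missing field and an unrequested one:
print(normalize('{ "name": "Ali", "city": "xxx" }'))
# {'name': 'Ali', 'age': '', 'major': ''}
```

Separately, Ollama has a `format` option on its generate/chat API that constrains the model to emit valid JSON, which reduces (but does not fully eliminate) schema drift; combining that with validation like the above is a common pattern.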
Any ideas?
Thanks
Motaseam Yousef