3 Comments
Harshit Srivastava

Loved it. Worth reading. Last weekend I started learning about RAG and LLMs, and found out about the attention mechanism used by Transformers. Your article gave me great insights. This is pt. 4, so I will definitely read parts 1-3 also. One thing I am confused about: LLMs are predictors and hallucinate if not tuned to answer about my dataset, and RAG is used to provide context as a prompt to LLMs. So does this mean we would not need RAG if we fine-tune a model? Or, the other way around, if we don't want to train models, we use RAG?

Jose Parreño Garcia

Hey Harshit, thanks for reading.

My answer:

- Fine-tuning is all about showing the model how to be helpful. For example, we can show it a lot of examples using a prompt technique and show it the best way of responding. It will learn what good looks like.

- RAG is then about you refining that helpfulness. Fine-tuning guarantees a certain level of helpfulness, but if your solution requires responding in a very specific way, or using contextual knowledge that only you have, then you pass that super-tuned model through your RAG system.
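To make the RAG side concrete, here is a minimal sketch in Python. The corpus, the toy keyword-overlap retriever, and the prompt template are all illustrative assumptions (a real system would use embeddings and a vector store); the point is just that retrieved context is pasted into the prompt before the fine-tuned model ever sees the question:

```python
# Minimal RAG sketch: retrieve relevant context, then hand it to the
# (already fine-tuned) model inside the prompt. The corpus and the
# naive keyword-overlap scoring below are stand-ins for a real
# embedding model + vector store.

CORPUS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am-5pm CET, Monday to Friday.",
    "Premium accounts include priority support and a 60-day return window.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble the prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The fine-tuned model would receive this prompt; here we just print it.
    print(build_prompt("What is the return window for premium accounts?"))
```

So fine-tuning shapes how the model answers, while the retrieval step above controls what facts it answers from.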

Harshit Srivastava

So that means both are necessary. Thanks for explaining it to me in such a detailed way. 🙂‍↕️
