Discussion about this post

Harshit Srivastava:

Loved it, worth reading. Last weekend I started learning about RAG and LLMs and found out about the attention mechanism used by Transformers. Your article gave me great insights; this is pt. 4, so I will definitely read parts 1-3 as well. One thing I am confused about: LLMs are predictors and hallucinate if not tuned to answer questions about my dataset, and RAG is used to provide context as a prompt to the LLM. Does this mean we would not need RAG if we fine-tune a model? Or, put another way, if we don't want to train models, we use RAG?
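
A minimal sketch of the mechanism the comment describes: RAG retrieves passages from your own dataset and injects them into the prompt, so the model answers from that context without any fine-tuning. The names here (`DOCS`, `retrieve`, `call_llm`) are illustrative stand-ins, not from the article; the retriever is a toy keyword-overlap scorer and `call_llm` is a placeholder for any real model API.

```python
# Toy document store standing in for "my dataset".
DOCS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The transformer attention mechanism weighs tokens against each other.",
    "Support hours are 9am to 5pm, Monday through Friday.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an API request)."""
    return "[model answer grounded in the context block above]"

def rag_answer(query: str) -> str:
    # The retrieved passages are stuffed into the prompt as context,
    # which is exactly what "RAG provides context as a prompt" means.
    context = "\n".join(retrieve(query, DOCS))
    prompt = f"CONTEXT:\n{context}\n\nQUESTION: {query}\nAnswer using only the context."
    return call_llm(prompt)

print(rag_answer("What is your refund policy?"))
```

Because the knowledge lives in the retrieved context rather than in the model's weights, swapping the document store updates the system's answers immediately, which is the usual argument for RAG over fine-tuning when the underlying data changes often.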

