Image credit: statsig

Fine-tuning the behavior of an AI model is not just about writing the right prompt; it is also about adjusting the temperature behind the scenes. In LLMs, temperature controls how creative or focused the model's output will be. A lower temperature, such as 0.0-0.2, keeps the response consistent, fact-based, and deterministic, […]
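As a minimal sketch of what this looks like in practice, the snippet below sends the same prompt twice, once with a low and once with a high temperature, so the difference in determinism can be compared directly. It assumes the Groq Python SDK and the model id "llama-3.3-70b-versatile"; both are assumptions, not details from the original test.

```python
# Minimal sketch: comparing a low vs. high temperature on one prompt.
# Assumes the Groq Python SDK is installed and GROQ_API_KEY is set in the
# environment; the model id below is an assumption.
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

prompt = "Explain in one sentence what temperature does in an LLM."

for temperature in (0.0, 1.2):
    response = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0.0 = near-deterministic, higher = more creative
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```

Running this a few times makes the point concrete: the low-temperature replies stay nearly identical, while the high-temperature replies vary in wording and structure.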
Image Credit: Encord

Fine-tuning parameters in AI models is more than adjusting dials; it is about aligning the model's behavior with real-world needs. When dealing with large and dynamic datasets, especially in industries like retail, the right parameter tuning ensures that outputs are not only accurate but also context-aware. Instead of relying on default settings, adjusting […]
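One way to move beyond defaults is to pass the generation parameters explicitly rather than accepting whatever the SDK falls back to. The sketch below again assumes the Groq Python SDK; the specific values are illustrative placeholders to tune against your own data, not recommendations.

```python
# Sketch: explicit generation parameters instead of library defaults.
# Values are illustrative; tune them for your own task and data.
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

generation_params = {
    "temperature": 0.2,  # low randomness for consistent, factual answers
    "top_p": 0.9,        # nucleus-sampling cutoff
    "max_tokens": 512,   # hard cap on the length of the reply
}

response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # assumed model id
    messages=[{"role": "user", "content": "Summarize this week's retail sales anomalies."}],
    **generation_params,
)
print(response.choices[0].message.content)
```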
I personally used the Groq API to test and compare how two AI models perform with the same input. I compared two powerful models—LLaMA 3 70B Versatile and DeepSeek R1 Distill LLaMA 70B—by sending an identical JSON prompt requesting Selenium Java code for Salesforce login automation. My goal was to analyze and test how these models handle […]
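A rough outline of how such a side-by-side comparison can be wired up is shown below. It assumes the Groq Python SDK, the model ids "llama-3.3-70b-versatile" and "deepseek-r1-distill-llama-70b", and an example JSON prompt structure; all of these are assumptions and may differ from what was actually used in the test.

```python
# Sketch: sending one identical JSON prompt to two Groq-hosted models and
# printing both replies for comparison. Model ids and prompt fields are assumptions.
import json
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

prompt = json.dumps({
    "task": "Generate Selenium Java code that automates a Salesforce login",
    "language": "Java",
    "framework": "Selenium WebDriver",
})

models = ["llama-3.3-70b-versatile", "deepseek-r1-distill-llama-70b"]

for model in models:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # keep output focused so differences reflect the models, not sampling noise
    )
    print(f"\n===== {model} =====\n{response.choices[0].message.content}")
```

Keeping the prompt and the sampling settings identical across both calls is the design choice that makes the comparison meaningful: any difference in the generated Selenium Java code then comes from the models themselves.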