
Stories by Amir Aryani on Medium

Published

Integrating Large Language Models (LLMs) such as GPT into organisations’ data workflows is a complex process with many challenges. These obstacles include, but are not limited to, technical, operational, ethical, and legal dimensions, each presenting hurdles that organisations must navigate to harness the full potential of LLMs effectively.

Published

Authors: Hui Yin, Amir Aryani

As we discussed in our previous article, “A Brief Introduction to Retrieval Augmented Generation (RAG)”, RAG is an artificial intelligence framework that incorporates up-to-date, reliable external knowledge to improve the quality of responses generated by pre-trained language models (PLMs). It was initially designed to improve performance on knowledge-intensive NLP tasks (Lewis et al., 2020).
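As a rough illustration of the retrieve-then-generate pattern described in the excerpt above, here is a minimal Python sketch. The sample documents and the retrieve, call_llm, and rag_answer helpers are hypothetical placeholders, not code from the article; a real pipeline would use dense embeddings for retrieval and an actual PLM for generation.

```python
# Minimal retrieve-then-generate sketch (illustrative only).
from typing import List

DOCUMENTS = [
    "RAG pairs a retriever over external documents with a text generator.",
    "Lewis et al. (2020) proposed RAG for knowledge-intensive NLP tasks.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Toy keyword-overlap retriever; a production system would use dense embeddings."""
    query_terms = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(query_terms & set(d.lower().split())))[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to GPT or another pre-trained language model."""
    return f"[model answer grounded in]: {prompt[:60]}..."

def rag_answer(query: str) -> str:
    # Retrieved passages are injected into the prompt so the model answers from them.
    context = "\n".join(retrieve(query, DOCUMENTS))
    return call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")

print(rag_answer("What were RAG models designed for?"))
```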

Published

Authors: Hui Yin, Amir Aryani

With the increasing application of large language models in various scenarios, people have come to realise that these models are not omnipotent. When generating dialogues (Shuster et al., 2021), the models often produce hallucinations, leading to inaccurate answers.

Published

There is a plethora of articles and predictions about the future development of AI. Reading through them suggests three main areas of high-impact change that could transform our daily lives: Healthcare, Automation, and Manufacturing. The impact of AI in these industries is profound and multifaceted, with each sector experiencing its own transformation. Some of these ideas are described in this NVIDIA article by Cliff Edwards.

Published

I asked generative AI models about their context window, and their responses were intriguing. The context window of a large language model (LLM) like OpenAI’s GPT refers to the maximum amount of text the model can consider at any one time when generating a response. This includes both the prompt provided by the user and the model’s generated text.
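To make the idea concrete, the sketch below counts prompt tokens with the tiktoken tokenizer and checks them against an assumed 8,192-token window, reserving room for the reply. The window size, the reserved output budget, and the fits_in_context helper are illustrative assumptions, not figures from the post.

```python
# Illustrative context-window budget check; the 8,192-token limit is an assumption.
import tiktoken  # pip install tiktoken

CONTEXT_WINDOW = 8_192        # assumed window size; actual limits vary by model
RESERVED_FOR_OUTPUT = 1_024   # tokens kept free for the model's generated reply

def fits_in_context(prompt: str, encoding_name: str = "cl100k_base") -> bool:
    """Return True if the prompt plus the reserved reply budget fits the assumed window."""
    enc = tiktoken.get_encoding(encoding_name)
    prompt_tokens = len(enc.encode(prompt))
    print(f"prompt uses {prompt_tokens} tokens")
    return prompt_tokens + RESERVED_FOR_OUTPUT <= CONTEXT_WINDOW

print(fits_in_context("Summarise the history of the transformer architecture."))
```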

Published

Prompt engineering is a technique for communicating effectively with large language models (LLMs) like GPT-3 or GPT-4 and getting the desired output. Here are applications of eight distinct prompting techniques for interacting with Mistral AI, showing how we can prompt effectively to learn about foundation models. Zero-Shot Learning: This involves giving the AI a task without any prior examples.
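As a small illustration of the zero-shot idea (the first of the eight techniques), the snippet below contrasts a zero-shot prompt with a few-shot variant. The ask_model function is a stand-in for whichever chat API you actually call (Mistral AI, GPT, or another model); the prompts themselves are invented examples.

```python
# Zero-shot vs. few-shot prompting, sketched with a placeholder model call.

def ask_model(prompt: str) -> str:
    """Stand-in for a real chat-completion call (e.g. to Mistral AI or GPT)."""
    return f"[model response to]\n{prompt}\n"

# Zero-shot: the task is stated with no worked examples.
zero_shot = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'The battery dies within an hour.'"
)

# Few-shot: a couple of labelled examples precede the same task.
few_shot = (
    "Review: 'Fantastic screen, great value.' -> positive\n"
    "Review: 'Stopped working after a week.' -> negative\n"
    "Review: 'The battery dies within an hour.' ->"
)

print(ask_model(zero_shot))
print(ask_model(few_shot))
```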

Published

Procrastination, a common adversary in our daily lives, often leads to a backlog of tasks and ambitions. It is a silent thief of opportunities and time. You might have a stack of books on your bookshelf, longing to be read. Perhaps there’s a new language you’ve wanted to learn, or piles of paperwork cluttering your office that you’ve been meaning to organise.

Published

At the time of writing this post (November 19, 2023), there are 401,014 models listed on Huggingface.co. This number is a testament to the success and impact of the open-source approach in the development of AI technologies. However, the challenge at hand is understanding which models are available and keeping up to date with the latest developments.
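One way to start making sense of a catalogue that large is to query the Hub programmatically. The sketch below uses the huggingface_hub client to list a few of the most-downloaded models; treat the exact arguments as an assumption, since the client’s parameters have changed across versions.

```python
# List a few of the most-downloaded models on the Hugging Face Hub.
# Assumes `pip install huggingface_hub`; argument names may vary between versions.
from huggingface_hub import HfApi

api = HfApi()
models = api.list_models(sort="downloads", direction=-1, limit=5)

for model in models:
    # Each entry is a ModelInfo record describing one model repository on the Hub.
    print(model.id)
```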