Rogue Scholar Posts

Published in Stories by Research Graph on Medium

Enhancing Open-Domain Conversational Question Answering with Knowledge-Enhanced Models and Knowledge Graphs. How knowledge-enhanced language models and knowledge graphs are advancing open-domain conversational question answering. Author: Wenyi Pi (ORCID: 0009-0002-2884-2771). When searching for information on the web, it is common to come across a flood of

Published in Stories by Research Graph on Medium

Efficient creation of a stoplight report with data dashboard images. Author: Yunzhong Zhang (ORCID: 0009-0002-8177-419X). Comparing data dashboards is crucial for understanding trends and performance differences. Traditionally, this task required manual effort, which was slow and sometimes inaccurate. Now, thanks to OpenAI’s GPT-4 with Vision (GPT-4V), we are able to automate and improve this process.
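As a sketch of how such an automated comparison might be wired up, the snippet below assembles the kind of chat-completion payload the OpenAI vision API accepts, with two dashboard screenshots encoded as base64 data URLs. The model name, prompt text, and image bytes are illustrative assumptions, and no request is actually sent:

```python
import base64
import json

def encode_image(png_bytes: bytes) -> str:
    # Base64-encode raw PNG bytes into a data URL the vision API accepts.
    return "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")

def build_comparison_payload(img_a: bytes, img_b: bytes) -> dict:
    # Assemble a GPT-4V style chat payload asking for a stoplight report.
    return {
        "model": "gpt-4-vision-preview",  # illustrative model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Compare these two dashboards and produce a "
                         "red/amber/green stoplight report of the key metrics."},
                {"type": "image_url", "image_url": {"url": encode_image(img_a)}},
                {"type": "image_url", "image_url": {"url": encode_image(img_b)}},
            ],
        }],
    }

payload = build_comparison_payload(b"fake-png-a", b"fake-png-b")
print(json.dumps(payload)[:80])
```

From here the payload would be posted to the chat-completions endpoint with an API key; keeping the payload construction separate makes it easy to test without network access.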

Published in Stories by Research Graph on Medium

Unlocking the Power of Questions — A deep dive into Question Answering Systems. Author: Amanda Kau (ORCID: 0009-0004-4949-9284). Virtual assistants have popped up on numerous websites over the years.

Published in Stories by Amir Aryani on Medium

Integrating Large Language Models (LLMs) such as GPT into organisations’ data workflows is a complex process with various challenges. These obstacles include but are not limited to technical, operational, ethical, and legal dimensions, each presenting hurdles that organisations must navigate to harness the full potential of LLMs effectively.

Published in Stories by Research Graph on Medium

An Introduction to Retrieval Augmented Generation (RAG) and Knowledge Graph. Author: Qingqin Fang (ORCID: 0009-0003-5348-4264). Introduction: Large Language Models (LLMs) have transformed the landscape of natural language processing, demonstrating exceptional proficiency in generating text that closely resembles human language.

Published in Stories by Amir Aryani on Medium

Authors: Hui Yin, Amir Aryani. As we discussed in our previous article “A Brief Introduction to Retrieval Augmented Generation (RAG)”, RAG is an artificial intelligence framework that incorporates the latest reliable external knowledge and aims to improve the quality of responses generated by pre-trained language models (PLMs). Initially, it was designed to improve the performance of knowledge-intensive NLP tasks (Lewis et al., 2020). As
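The retrieve-then-generate pattern described here can be sketched in a few lines. The toy corpus, the bag-of-words scoring, and the prompt template below are all illustrative assumptions; a real RAG system would use a learned retriever and hand the final prompt to a pre-trained language model:

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Naive bag-of-words retriever: score each document by the number
    # of words it shares with the query, then keep the top-k documents.
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query: str, corpus: list[str]) -> str:
    # Stuff the retrieved passages into the prompt as external context,
    # so the language model grounds its answer in them.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RAG augments a language model with retrieved external documents.",
    "Knowledge graphs store entities and their relations as triples.",
    "Paris is the capital of France.",
]
prompt = build_rag_prompt("What does RAG augment a language model with?", corpus)
print(prompt)
```

The design point is the same one the article makes: the generator never has to memorise the external knowledge, because the most relevant passages are fetched at query time and prepended to the prompt.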

Published in Stories by Amir Aryani on Medium

Authors: Hui Yin, Amir Aryani. With the increasing application of large language models in various scenarios, people realize that these models are not omnipotent. When generating dialogues (Shuster et al., 2021), the models often produce hallucinations, leading to inaccurate answers.

Published in Stories by Amir Aryani on Medium

I asked Generative AI Models about their context window. Their response was intriguing. The context window for a large language model (LLM) like OpenAI’s GPT refers to the maximum amount of text the model can consider at any one time when generating a response. This includes both the prompt provided by the user and the model’s generated text.
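A sketch of the bookkeeping this implies: with a fixed context window, prompt tokens plus generated tokens must fit one budget, so older prompt text has to be dropped. The whitespace “tokenizer” below is a deliberate simplification of the subword tokenizers real LLMs use, and the window sizes are made up:

```python
def fit_to_context(prompt: str, context_window: int, reserve_for_output: int) -> str:
    # Keep only the most recent prompt tokens that fit once we reserve
    # room in the window for the model's generated reply.
    budget = context_window - reserve_for_output
    tokens = prompt.split()  # crude stand-in for a subword tokenizer
    return " ".join(tokens[-budget:]) if len(tokens) > budget else prompt

prompt = " ".join(f"w{i}" for i in range(100))
trimmed = fit_to_context(prompt, context_window=50, reserve_for_output=10)
print(len(trimmed.split()))  # 40 tokens kept for the prompt
```

Dropping the oldest text first mirrors how chat applications usually handle overlong conversations, which is why an LLM can “forget” the start of a long session.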

Published in Stories by Amir Aryani on Medium

Prompt engineering is a technique used to communicate effectively with large language models (LLMs) like GPT-3 or GPT-4 and get the desired output. Here are applications of 8 distinct prompting techniques for interacting with Mistral AI, exploring how we can prompt effectively to learn about Foundation Models. Zero-Shot Learning: This involves giving the AI a task without any prior examples.
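The difference between zero-shot prompting and its few-shot counterpart is visible in the prompt text itself. The snippet below contrasts the two; the sentiment task, reviews, and labels are made up for illustration:

```python
task = "Classify the sentiment of the review as positive or negative."

# Zero-shot: state the task with no worked examples.
zero_shot = f"{task}\nReview: The battery lasts all day.\nSentiment:"

# Few-shot: prepend a couple of solved examples so the model can
# infer the expected format and label set before answering.
examples = [
    ("Great screen, very sharp.", "positive"),
    ("Broke after one week.", "negative"),
]
demos = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
few_shot = f"{task}\n{demos}\nReview: The battery lasts all day.\nSentiment:"

print(zero_shot)
print("---")
print(few_shot)
```

Both prompts end at "Sentiment:" so the model's completion is the label itself; the few-shot variant simply pays a few extra tokens to pin down the output format.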

Published in Stories by Amir Aryani on Medium

At the time of writing this post (November 19, 2023), there are 401,014 models and transformers listed on Huggingface.co. This number is a testament to the success and impact of the open-source model in the development of AI technologies. However, the challenge at hand is understanding the available models and keeping up to date with the latest developments.