In Artificial Intelligence (AI), foundation models such as Large Language Models (LLMs) are large-scale machine learning models trained on large text datasets for natural language processing (NLP) and natural language generation (NLG) tasks. These models are then adapted through fine-tuning to a wide variety of downstream NLP applications, such as classification, content generation, language translation, information retrieval, and conversational AI. However, as LLMs grow in scale, fine-tuning them on downstream tasks becomes computationally and memory-intensive, because conventional fine-tuning updates every parameter of the pre-trained model on new data.
Parameter-Efficient Fine-Tuning (PEFT) techniques are a family of methods that fine-tune only a small subset of a pre-trained model's parameters, such as an LLM's, while achieving comparable performance with greatly reduced computational requirements.
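For intuition, the sketch below shows how a PEFT method such as LoRA can be applied with the open-source Hugging Face `peft` library: the pre-trained weights stay frozen, and only small low-rank adapter matrices injected into the attention layers are trained. The model name and hyperparameter values here are illustrative, not project settings.

```python
# Minimal LoRA sketch using Hugging Face `transformers` and `peft`.
# The base model's weights are frozen; only the low-rank adapters train.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

lora_config = LoraConfig(
    r=8,                                # rank of the low-rank update matrices
    lora_alpha=16,                      # scaling factor for the adapter output
    target_modules=["query", "value"],  # BERT attention projections to adapt
    lora_dropout=0.1,
    task_type="SEQ_CLS",                # sequence classification
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% of all weights
```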
- Perform a high-level theoretical study of PEFT techniques, including their functionality, advantages, and limitations.
- Design and implement API-driven PEFT functional modules that can be consumed when fine-tuning ML models.
- Leverage publicly available PEFT techniques, covering LoRA and QLoRA (see the QLoRA sketch after this list).
- Design and implement a basic plugin framework that enables integrating modularized PEFT techniques for consumption by LLMs (a hypothetical design is sketched after this list).
- Implement a basic integration of one open-source language model (e.g., BERT) into the plugin framework to consume the PEFT techniques.
- Demonstrate the plugin framework's capabilities with a minimal user experience.
- Implement a basic graphical UI using Streamlit.
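Of the two techniques targeted above, QLoRA extends LoRA by loading the base model with 4-bit quantized weights, which further shrinks memory use during fine-tuning. The sketch below, assuming the Hugging Face `transformers`, `bitsandbytes`, and `peft` libraries, shows the typical setup; the model name and hyperparameters are illustrative.

```python
# Minimal QLoRA sketch: the base model is loaded with 4-bit (NF4)
# quantized weights via bitsandbytes, and LoRA adapters are trained
# on top of the frozen quantized model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,      # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "gpt2",                              # any causal LM; chosen here for size
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],           # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```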
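The plugin framework referenced above could be as small as a registry mapping technique names to classes that implement a shared interface. The following is one hypothetical design, not an existing API: `PEFTPlugin`, `register_plugin`, and `apply_peft` are names invented for illustration.

```python
# Hypothetical plugin registry for PEFT techniques. All names here are
# illustrative, not an existing API.
from abc import ABC, abstractmethod

class PEFTPlugin(ABC):
    """Common interface every PEFT technique plugin must implement."""

    @abstractmethod
    def apply(self, model, **config):
        """Wrap `model` with this technique and return the adapted model."""

_REGISTRY: dict[str, type[PEFTPlugin]] = {}

def register_plugin(name: str):
    """Class decorator that registers a plugin under `name`."""
    def decorator(cls: type[PEFTPlugin]) -> type[PEFTPlugin]:
        _REGISTRY[name] = cls
        return cls
    return decorator

@register_plugin("lora")
class LoRAPlugin(PEFTPlugin):
    def apply(self, model, **config):
        from peft import LoraConfig, get_peft_model
        return get_peft_model(model, LoraConfig(**config))

def apply_peft(name: str, model, **config):
    """Look up a registered technique by name and apply it to `model`."""
    return _REGISTRY[name]().apply(model, **config)
```

A caller would then adapt a model with something like `apply_peft("lora", base_model, r=8, task_type="SEQ_CLS")`, keeping the fine-tuning code independent of any specific technique.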
- Sanjana C
- Tharun
- Pooja Kulkarni
- Sujal Singh
- A theoretical study of PEFT techniques, including their functionalities, advantages, and limitations.
- A plugin framework for integrating PEFT techniques, with a focus on LoRA and QLoRA.
- A basic integration of an open-source language model (e.g., BERT) into the plugin framework.
- A minimal user experience demonstrating the plugin framework's capabilities.
- A basic graphical UI built with Streamlit (see the sketch after this list).
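As a rough idea of what the Streamlit UI could look like, the sketch below wires a technique selector and a couple of LoRA hyperparameters to a button; the layout, labels, and the commented `apply_peft` call are hypothetical.

```python
# Minimal Streamlit front end for the plugin framework
# (hypothetical layout; run with `streamlit run app.py`).
import streamlit as st

st.title("PEFT Plugin Framework Demo")

technique = st.selectbox("PEFT technique", ["lora", "qlora"])
rank = st.slider("LoRA rank (r)", min_value=1, max_value=64, value=8)
alpha = st.slider("LoRA alpha", min_value=1, max_value=128, value=16)

if st.button("Fine-tune"):
    st.write(f"Applying {technique} with r={rank}, alpha={alpha}...")
    # model = apply_peft(technique, base_model, r=rank, lora_alpha=alpha, ...)
    st.success("Adapter configured (training loop omitted in this sketch).")
```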
The project presentation can be found at (PRESENTATION_LINK).



