llama3
Here are 330 public repositories matching this topic...
Choose the model that's right for you
Updated Jun 5, 2024 - Python
Easy "1-line" calling of all LLMs from OpenAI, MS Azure, AWS Bedrock, GCP Vertex, and Ollama
Updated Jun 11, 2024 - Python
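As an illustration of one of the backends this kind of unified wrapper sits on top of, here is a minimal sketch of calling a locally running Ollama server directly through its documented HTTP API (`POST /api/generate` on port 11434). This is not the repository's own API; the function names are ours.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_ollama(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to a locally running Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_generate_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires `ollama serve` running locally):
#   answer = ask_ollama("Why is the sky blue?")
```

A unified multi-provider library would wrap the same pattern for OpenAI, Azure, Bedrock, and Vertex behind one call signature.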
Chat locally using leading open models built by the community, optimized and accelerated by NVIDIA's enterprise-ready inference runtime
Updated Jun 11, 2024 - Python
Updated May 15, 2024 - Java
Adapted BERTopic pipeline for Topic Modeling the arXiv dataset
Updated Jun 8, 2024 - Python
Testing the capabilities of the Llama 3 language model, specifically the Meta-Llama-3-8B-Instruct variant with 8 billion parameters.
Updated Jun 2, 2024 - Python
This repository contains an example of how to build AI agents using the CrewAI library, the Llama3 model, and the Groq API in Python. The goal is to provide a basic framework for configuring and running AI agents.
Updated Jun 4, 2024 - Python
An attempt to run Ollama on Kubernetes
Updated Jun 9, 2024
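A minimal sketch of what running Ollama on Kubernetes might look like: a Deployment for the official `ollama/ollama` image plus a Service exposing its default API port. This is an illustration under standard conventions, not the repository's actual manifests; a real setup would likely use a PersistentVolumeClaim instead of `emptyDir` so pulled models survive pod restarts.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:latest
          ports:
            - containerPort: 11434   # Ollama's default API port
          volumeMounts:
            - name: models
              mountPath: /root/.ollama   # where Ollama stores pulled models
      volumes:
        - name: models
          emptyDir: {}   # ephemeral; swap for a PVC to persist models
---
apiVersion: v1
kind: Service
metadata:
  name: ollama
spec:
  selector:
    app: ollama
  ports:
    - port: 11434
      targetPort: 11434
```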
A Telegram bot powered by Llama 2 and Llama 3 AI
Updated Jun 11, 2024 - Java
How to create a local RAG (Retrieval-Augmented Generation) pipeline that processes and lets you chat with your PDF file(s) using Ollama and LangChain.
Updated May 11, 2024 - Jupyter Notebook
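The core of any such RAG pipeline is chunking the extracted PDF text and retrieving the chunks most similar to the question. Below is a dependency-free sketch of just that retrieval step; in the actual pipeline the vectors would come from an embedding model (e.g. served by Ollama) rather than being passed in by hand, and LangChain would handle the orchestration.

```python
import math


def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split extracted PDF text into overlapping character chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query_vec: list[float], chunk_vecs: list[list[float]],
             chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks whose embeddings are most similar to the query."""
    ranked = sorted(zip(chunks, chunk_vecs),
                    key=lambda cv: cosine(query_vec, cv[1]), reverse=True)
    return [c for c, _ in ranked[:k]]
```

The retrieved chunks are then stuffed into the prompt sent to the local model, which is what lets it answer questions grounded in the PDF's content.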
Automatically creating Google Calendar events from time-sensitive items on my to-do lists (.md files)
Updated May 25, 2024 - Python
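The first half of such a tool is pulling dated items out of the markdown files. Here is a small sketch under an assumed line format (`- [ ] Task title @YYYY-MM-DD`, which is hypothetical, not necessarily the repository's); actually creating the events would then go through the Google Calendar API's `events.insert` endpoint.

```python
import re
from datetime import date

# Hypothetical to-do line format assumed for this sketch:
#   - [ ] Submit report @2024-06-15
TASK_RE = re.compile(r"^- \[ \] (?P<title>.+?) @(?P<date>\d{4}-\d{2}-\d{2})\s*$")


def parse_tasks(markdown: str) -> list[dict]:
    """Extract unchecked, dated tasks from a markdown to-do list."""
    events = []
    for line in markdown.splitlines():
        m = TASK_RE.match(line.strip())
        if m:  # checked items ("- [x]") and undated lines are skipped
            y, mo, d = map(int, m.group("date").split("-"))
            events.append({"summary": m.group("title"), "start": date(y, mo, d)})
    return events
```

Each returned dict maps naturally onto the `summary` and `start` fields of a Google Calendar event resource.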