Seven weeks (4.9.–22.10.2023) in the world of Large Language Models and Generative AI tools, this time with a stronger focus on the engineering side:
Prompt engineering:
Parallel processing in prompt engineering: the skeleton-of-thought technique.
Unlocking reliable generations through Chain-of-Verification - a leap in prompt engineering.
LLMOps: production prompt engineering patterns with Hamilton.
Crafting different types of program simulation prompts - defining the new program simulation prompt framework.
Some kick-ass prompt engineering techniques to boost our LLMs.
And other prompt engineering tips, a neural network how-to, and recent must-reads.
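The skeleton-of-thought technique from the first link above can be sketched in a few lines: ask the model for a short skeleton of the answer, then expand each skeleton point as an independent request, in parallel. This is a minimal illustration only; `call_llm` is a placeholder stub, not a real API client, and in a real pipeline the points would be parsed from the model's skeleton response.

```python
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call (e.g. an OpenAI or Bedrock client).
    # It just echoes the prompt so the sketch runs without network access.
    return f"[answer to: {prompt}]"

def skeleton_of_thought(question: str) -> str:
    # Stage 1: request a short skeleton of the answer.
    skeleton = call_llm(f"Give 3 short bullet points outlining an answer to: {question}")
    # In a real implementation these points would be parsed from `skeleton`;
    # hard-coded here because the stub above cannot produce real bullets.
    points = ["definition", "use cases", "limitations"]

    # Stage 2: expand every skeleton point in parallel. The key idea of
    # skeleton-of-thought is that the expansions are independent requests,
    # so they can be issued concurrently instead of one long sequential chain.
    with ThreadPoolExecutor() as pool:
        expansions = list(pool.map(
            lambda p: call_llm(f"Expand the point '{p}' for the question: {question}"),
            points,
        ))
    return "\n".join(expansions)

print(skeleton_of_thought("What is RAG?"))
```

The parallelism is what makes the technique attractive in practice: wall-clock latency is roughly one skeleton call plus one expansion call, instead of one long sequential generation.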
AI Development and Engineering:
The team behind GitHub Copilot shares its lessons from building the app.
Amazon Bedrock for building and scaling generative applications is now generally available.
Experience from building generative AI apps on Amazon Web Services, using Amazon Bedrock and SageMaker.
A 7-step guide to mastering LLMs.
Key tools for enhancing Generative AI in Data Lake Houses.
An introduction to loading Large Language Models.
Introduction to ML engineering and LLMOps with OpenAI and LangChain.
MLOps and LLM deployment strategies for software engineers.
Modern MLOps platform for Generative AI.
Leveraging the power of LLMs to guide AutoML hyperparameter searches.
LLMs demand Observability-Driven Development.
LLM monitoring and observability — a summary of techniques and approaches.
How to build and benchmark your LLM evals.
A step-by-step guide to selecting and running your own generative model.
Google Research: Outperforming larger language models with less training data and smaller model sizes - distilling step-by-step.
Google Research: Rethinking calibration for in-context learning and prompt engineering.
Apache Kafka as a mission-critical Data Fabric for GenAI.
Training ChatGPT on your own data.
Hugging Face's guide to optimizing LLMs in production.
Hugging Face is becoming the "GitHub" for Large Language Models.
Building a microservice for multi-chat backends using Llama and ChatGPT.
Connect GPT models with company data in Microsoft Azure.
Tuning LLMs with MakerSuite.
Fine-tuning LLMs: Parameter Efficient Fine Tuning (PEFT), LoRA and QLoRA.
How to train BERT for masked language modeling tasks.
Extending context length in Large Language Models.
Conversational applications with Large Language Models: understanding the sequence of user inputs, prompts, and responses.
Using data lakes and Large Language Models in development.
How to build an LLM from scratch.
LLM output parsing: function calling vs. LangChain.
Enhancing the power of Llama 2: 3 easy methods for improving your Large Language Model.
Keeping LLMs relevant and current - Retrieval Augmented Generation (RAG).
Build and deploy Retrieval Augmented Generative Pipelines with Haystack.
Why your RAG is not reliable in a production environment.
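Several of the items above revolve around Retrieval Augmented Generation. The core pattern is simple: retrieve the documents most relevant to the query, put them into the prompt as context, then generate. The sketch below illustrates that flow under loud assumptions: the bag-of-words "embedding" and the in-memory document list stand in for a real embedding model and vector database, and the final prompt is returned rather than sent to an LLM.

```python
import math
from collections import Counter

# Toy corpus; a production RAG system would hold these in a vector store.
DOCS = [
    "RAG augments an LLM prompt with retrieved documents.",
    "LoRA fine-tunes a model with low-rank adapter matrices.",
    "Kafka can serve as a data fabric for streaming pipelines.",
]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a neural
    # embedding model instead of token counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank all documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def rag_prompt(query: str) -> str:
    # Ground the generation in retrieved context; this grounding is what
    # keeps the LLM's answers "relevant and current" without retraining.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(rag_prompt("What does RAG add to an LLM prompt?"))
```

The reliability problems called out in the last link typically live in the retrieval step of this loop: if the wrong documents come back, the most capable generator still answers from the wrong context.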
QCon San Francisco:
Unlocking enterprise value with Large Language Models.
A modern compute stack for scaling large AI, ML, & LLM workloads.