File | Latest commit message | Commit date
conftest.py | llm.get_async_model(), llm.AsyncModel base class and OpenAI async models (#613) | 2024-11-13 17:51:00 -08:00
test-llm-load-plugins.sh | Test using test-llm-load-plugins.sh | 2024-01-25 17:44:34 -08:00
test_aliases.py | Make tests robust against extra plugins, closes #258 | 2023-09-10 11:21:04 -07:00
test_async.py | Test for basic async conversation, refs #632 | 2024-11-14 14:28:17 -08:00
test_attachments.py | Special case treat audio/wave as audio/wav, closes #603 | 2024-11-07 17:13:54 -08:00
test_chat.py | llm.get_async_model(), llm.AsyncModel base class and OpenAI async models (#613) | 2024-11-13 17:51:00 -08:00
test_cli_openai_models.py | audio/wav not audio/wave, refs #603 | 2024-11-12 21:43:07 -08:00
test_embed.py | batch_size= argument to embed_multi(), refs #273 | 2023-09-13 16:24:04 -07:00
test_embed_cli.py | Windows readline fix, plus run CI against macOS and Windows | 2024-01-26 16:24:58 -08:00
test_encode_decode.py | NumPy decoding docs, plus extra tests for llm.encode/decode | 2023-09-14 14:01:47 -07:00
test_keys.py | llm keys get command, refs #623 | 2024-11-11 09:47:13 -08:00
test_llm.py | llm.get_async_model(), llm.AsyncModel base class and OpenAI async models (#613) | 2024-11-13 17:51:00 -08:00
test_migrate.py | Binary embeddings (#254) | 2023-09-11 18:58:44 -07:00
test_plugins.py | Moved iter_prompt from Response to Model, moved a lot of other stuff | 2023-07-10 07:45:11 -07:00
test_templates.py | Upgrade to pytest-httpx>=0.33.0 | 2024-10-28 15:41:34 -07:00
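test_encode_decode.py covers llm.encode/llm.decode, which round-trip an embedding vector to and from a compact binary blob (the "Binary embeddings" work in #254). A minimal sketch of the general idea, packing little-endian 32-bit floats with the standard library; this is an illustration of the technique, not the library's actual implementation:

```python
import struct


def encode(values):
    # Pack a list of floats as consecutive little-endian float32 values.
    return struct.pack("<" + "f" * len(values), *values)


def decode(binary):
    # Unpack the blob back into a tuple of floats (float32 precision).
    return struct.unpack("<" + "f" * (len(binary) // 4), binary)


# Values exactly representable in float32 survive the round trip unchanged.
vector = [0.5, -1.25, 3.0]
assert decode(encode(vector)) == (0.5, -1.25, 3.0)
```

Storing float32 rather than Python floats quarters the storage per dimension, which matters when thousands of embeddings live in a SQLite database.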
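test_embed.py exercises the batch_size= argument to embed_multi() (#273), which splits a large collection of inputs into fixed-size chunks before embedding them. A hedged sketch of that chunking pattern using a hypothetical helper named batched (the name and signature are illustrative, not part of the llm API):

```python
def batched(items, batch_size):
    # Yield successive slices of at most batch_size items.
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]


texts = ["one", "two", "three", "four", "five"]
batches = list(batched(texts, 2))
assert batches == [["one", "two"], ["three", "four"], ["five"]]
```

Each batch can then be sent to the embedding backend in a single request, trading memory for fewer round trips.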