💬 Local inference
Harness the power of modern consumer-grade hardware to run different LLMs locally, with minimal loss in performance.
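As a quick illustration of what local inference can look like in practice, here is a minimal sketch using the llama-cpp-python bindings with a quantized GGUF model. The library choice, model path, and generation parameters are assumptions for illustration, not necessarily this project's actual setup.

```python
# Minimal local-inference sketch (assumption: `pip install llama-cpp-python`
# and a quantized GGUF model already downloaded; the path below is hypothetical).
from llama_cpp import Llama

# Load the quantized model. n_ctx sets the context window;
# n_gpu_layers=-1 offloads all layers to the GPU if one is available,
# otherwise inference falls back to the CPU.
llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical path
    n_ctx=4096,
    n_gpu_layers=-1,
)

# Run a chat completion entirely on local hardware.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain quantization in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

Quantized formats like GGUF are what make this feasible on consumer hardware: a 4-bit quantization of an 8B-parameter model fits comfortably in the RAM or VRAM of a typical desktop while keeping output quality close to the full-precision model.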