To start using mikupad, you first need to connect it to an LLM backend. All connection settings are found in the Parameters section of the sidebar.
- **Server URL**: In the `Server` field, enter the full URL of your LLM backend. For a local llama.cpp server instance, this is typically something like `http://127.0.0.1:8080`.
- **API Type**: From the `API` dropdown, select the type of backend you are running:
  - `llama.cpp`
  - `KoboldCpp`
  - `OpenAI Compatible` (for backends like TabbyAPI, OpenRouter, etc.)
  - `AI Horde` (connects to the distributed AI Horde network)
- **API Key (Optional)**: If your backend or service (like OpenAI chat completion) requires an API key, enter it in the `API Key` field. You can toggle the key's visibility with the eye icon.
- **Model (Optional)**: For the `OpenAI Compatible` and `AI Horde` APIs, you can specify which model you want to use.
- **Strict API (Optional)**: Check this when using the official OpenAI API or any other API that rejects non-standard fields in requests.
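For example, to get a local llama.cpp backend that matches the default `Server` URL above, you can start `llama-server` yourself (a minimal sketch; the model path here is a placeholder, and your build location and flags may differ):

```shell
# Start a local llama.cpp server on the address mikupad expects by default.
# "models/model.gguf" is a placeholder; point it at any GGUF model you have.
./llama-server -m models/model.gguf --host 127.0.0.1 --port 8080

# In mikupad: Server = http://127.0.0.1:8080, API = llama.cpp
```

You can confirm the server is reachable before connecting mikupad, e.g. with `curl http://127.0.0.1:8080/health`.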
Once your settings are configured, you can begin writing in the main text area. Press the Predict button or Ctrl+Enter to start generating text.
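Under the hood, pressing Predict sends a completion request to the configured backend. As an illustration only (this is the llama.cpp server's own completion endpoint, not necessarily mikupad's exact payload), an equivalent request looks like:

```shell
# Ask a local llama.cpp server to continue a prompt, as mikupad's Predict
# button does. "n_predict" caps the number of generated tokens.
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Once upon a time", "n_predict": 32}'
```

If this returns generated text, mikupad pointed at the same URL should work as well.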