Connecting to a Backend
lmg-anon edited this page 2025-11-05 16:44:48 -03:00

To start using mikupad, you first need to connect it to an LLM backend. All connection settings are found in the Parameters section of the sidebar.
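If you do not yet have a backend running, one way to get a local llama.cpp server listening on the default address is shown below. This is a sketch: `model.gguf` is a placeholder for your own model file, and the binary name may differ depending on how you built llama.cpp.

```shell
# Serve a GGUF model over HTTP on 127.0.0.1:8080 using llama.cpp's llama-server.
# model.gguf is a placeholder for your actual model file.
llama-server -m model.gguf --host 127.0.0.1 --port 8080

# From another terminal, verify the server is reachable before connecting mikupad:
curl http://127.0.0.1:8080/health
```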

  1. Server URL: In the Server field, enter the full URL of your LLM backend. For a local llama.cpp server instance, this is typically something like http://127.0.0.1:8080.

  2. API Type: From the API dropdown, select the type of backend you are running:

    • llama.cpp
    • KoboldCpp
    • OpenAI Compatible (for backends like TabbyAPI, OpenRouter, etc.)
    • AI Horde (connects to the distributed AI Horde network)

  3. API Key (Optional): If your backend or service (such as the OpenAI chat completions API) requires an API key, enter it in the API Key field. You can toggle the key's visibility with the eye icon.

  4. Model (Optional): For the OpenAI Compatible and AI Horde APIs, you can specify which model to use.

  5. Strict API (Optional): Check this when using the official OpenAI API or any other API that rejects non-standard fields in API requests.
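To give a sense of how these settings fit together, the sketch below builds the kind of OpenAI-compatible completion request a frontend might assemble from them. The function and the `min_p` example field are illustrative assumptions, not mikupad's actual internals.

```python
def build_request(server_url, api_key=None, model=None, strict=False):
    """Map connection settings onto a (url, headers, body) triple for an
    OpenAI-compatible completion request. Illustrative only; fields beyond
    the OpenAI spec (e.g. min_p) are assumed backend extensions."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        # The API Key field becomes a standard Bearer token header.
        headers["Authorization"] = f"Bearer {api_key}"
    body = {"prompt": "Hello", "max_tokens": 16}
    if model:
        # The Model field is sent as the "model" property of the request.
        body["model"] = model
    if not strict:
        # Non-standard sampler fields (e.g. llama.cpp's min_p) that the
        # official OpenAI API would reject; Strict API omits them.
        body["min_p"] = 0.05
    return server_url.rstrip("/") + "/v1/completions", headers, body
```

This is why Strict API matters: with it enabled, only fields in the official specification are sent, so strict servers do not reject the request.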

Once your settings are configured, you can begin writing in the main text area. Press the Predict button or Ctrl+Enter to start generating text.