Function Calling with LLM vs. API Management



Several commercial LLMs, such as OpenAI's GPT-4 (and some models in the 3.5 family) and Aleph Alpha, support a technique called Function Calling: the ability of the LLM to call back into your code through functions you have exposed for this purpose. Functions are exposed through the prompt's context with a special syntax. If, during completion processing, the LLM decides that the information it needs might be obtained from an exposed function, it returns a callback request with the baked parameters for the actual function call. It is then the client's responsibility to invoke the function (by whatever means it sees fit) and inject the (partial or full) result back into the prompt for further LLM processing. This round trip may occur several times during the processing of a single prompt.

Note that the invoked function need not be purely "informative". It could send an e-mail, post something online, make a purchase, etc.

Example

We are going to call OpenAI's chat completions endpoint and expose a simple function that receives a single parameter of type object:

POST /v1/chat/completions HTTP/1.1
Host: api.openai.com
Content-Type: application/json
Authorization: Bearer XXX

{
    "model": "gpt-4",
    "messages": [{
        "role": "user",
        "content": "What's the weather in Tel-Aviv?"
    }],
    "functions": [{
        "name": "get_current_weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string"
                }
            },
            "required": ["location"]
        }
    }],
    "max_tokens": 256,
    "temperature": 0.8
}

A successful response to this request provides the template for the function invocation:

{
    "id": "chatcmpl-8vssxNjof6PsNHIVqRWw2Xhh9ZakX",
    "object": "chat.completion",
    "created": 1708806367,
    "model": "gpt-4-0613",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": null,
                "function_call": {
                    "name": "get_current_weather",
                    "arguments": "{\n  \"location\": \"Tel-Aviv\"\n}"
                }
            },
            "logprobs": null,
            "finish_reason": "function_call"
        }
    ],
    "usage": {
        "prompt_tokens": 48,
        "completion_tokens": 19,
        "total_tokens": 67
    },
    "system_fingerprint": null
}
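
The client is now expected to actually execute get_current_weather with these arguments and send the result back in a follow-up request: the messages array is extended with the assistant's function_call message and a message with role "function" carrying the function's output. The weather payload below is made up for illustration; only the message structure is prescribed by the API:

POST /v1/chat/completions HTTP/1.1
Host: api.openai.com
Content-Type: application/json
Authorization: Bearer XXX

{
    "model": "gpt-4",
    "messages": [{
        "role": "user",
        "content": "What's the weather in Tel-Aviv?"
    }, {
        "role": "assistant",
        "content": null,
        "function_call": {
            "name": "get_current_weather",
            "arguments": "{\n  \"location\": \"Tel-Aviv\"\n}"
        }
    }, {
        "role": "function",
        "name": "get_current_weather",
        "content": "{\"temperature\": 22, \"unit\": \"celsius\"}"
    }],
    "functions": [{
        "name": "get_current_weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string"
                }
            },
            "required": ["location"]
        }
    }],
    "max_tokens": 256,
    "temperature": 0.8
}

This time the model has the data it needs, so it responds with a regular assistant message (finish_reason "stop") that answers the user's question.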

Note that the Semantic Kernel (SK) framework greatly simplifies this job. You just decorate a .NET method with the [KernelFunction] attribute inside a plugin class; SK will not only expose this function to the LLM but will also call it if the LLM requests that.

In SK jargon these are called "Plugins", and they are interchangeable between ChatGPT, Bing, and Microsoft 365.

using System;
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class LightPlugin
{
    public bool IsOn { get; set; } = false;

    [KernelFunction, Description("Explains how to turn lights on for plugin")]
    public string HowTo(string input)
    {
        return "Simply press 'Restart' button";
    }

    [KernelFunction, Description("Get the state of the light")]
    public string GetState() => this.IsOn ? "on" : "off";

    [KernelFunction, Description("Changes the state of the light")]
    public string ChangeState(bool newState)
    {
        this.IsOn = newState;
        var state = this.GetState();

        Console.ForegroundColor = ConsoleColor.DarkBlue;
        Console.WriteLine($"[Light is now {state}]");
        Console.ResetColor();

        return state;
    }
}

Usage of this plugin:

IKernelBuilder kernelBuilder = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        "gpt4",          // Azure OpenAI deployment name
        openaiEndpoint,
        openaiAzureKey);
kernelBuilder.Plugins.AddFromType<LightPlugin>();
Kernel kernel = kernelBuilder.Build();

// A templated prompt that injects {{$input}} and calls the plugin's HowTo function
const string promptTemplate = @"
    Look at the text below: {{$input}} {{LightPlugin.HowTo $input}}";

// Invoke the kernel with the templated prompt and display the result
FunctionResult result = await kernel.InvokePromptAsync(
    promptTemplate,
    new KernelArguments { ["input"] = "How do I turn the lights on for the plugin?" });
await Console.Out.WriteLineAsync(result.GetValue<string>());
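
In the template above the plugin function is invoked at prompt-rendering time. SK can also wire the LLM-driven round trip for you: with automatic tool calling enabled, the model itself decides when to call LightPlugin functions and SK executes them behind the scenes. Below is a minimal sketch, assuming the Microsoft.SemanticKernel.Connectors.OpenAI package and the kernel built above:

using Microsoft.SemanticKernel.Connectors.OpenAI;

OpenAIPromptExecutionSettings settings = new()
{
    // Let SK invoke [KernelFunction]-decorated methods whenever the model requests them
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};
FunctionResult autoResult = await kernel.InvokePromptAsync(
    "Turn the light on and report its state.",
    new KernelArguments(settings));
await Console.Out.WriteLineAsync(autoResult.GetValue<string>());

Here the model emits a function-call request just like in the raw HTTP example, but SK performs the invocation and feeds the result back into the conversation automatically.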