Listing Models (GET /api/tags)

The model list endpoint returns information about every locally installed model, including its name, size, and last-modified time.

Basic Usage

curl http://localhost:11434/api/tags

Response:

{
  "models": [
    {
      "name": "llama3.2:latest",
      "modified_at": "2024-01-15T10:00:00Z",
      "size": 4661224676,
      "digest": "abc123...",
      "details": {
        "format": "gguf",
        "family": "llama",
        "parameter_size": "3B",
        "quantization_level": "Q4_K_M"
      }
    },
    {
      "name": "mistral:latest",
      "modified_at": "2024-01-14T08:30:00Z",
      "size": 4108928000,
      "digest": "def456...",
      "details": {
        "format": "gguf",
        "family": "mistral",
        "parameter_size": "7B",
        "quantization_level": "Q4_K_M"
      }
    }
  ]
}

Response Fields

Each model entry contains the following fields:

Field         Description
name          Model name and tag
modified_at   Last modification time
size          Model size in bytes
digest        Model content hash
details       Additional model metadata
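The size field is a raw byte count and modified_at is an RFC 3339 timestamp. A small helper (the function names below are illustrative, not part of the API) can render both in human-friendly form:

```python
from datetime import datetime

def format_size(num_bytes):
    """Convert a raw byte count to a GB string, matching the output shown later."""
    return f"{num_bytes / (1024 ** 3):.2f} GB"

def parse_modified(ts):
    """Parse an RFC 3339 timestamp like '2024-01-15T10:00:00Z'.

    The Z-to-offset replacement keeps this compatible with Python < 3.11,
    where fromisoformat does not accept a trailing 'Z'.
    """
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

print(format_size(4661224676))  # 4.34 GB
print(parse_modified("2024-01-15T10:00:00Z").year)  # 2024
```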

The details Field

Field                Description
format               Model format (usually gguf)
family               Model family (llama, mistral, etc.)
parameter_size       Parameter count (7B, 13B, etc.)
quantization_level   Quantization level (Q4_K_M, etc.)
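The details block makes it easy to slice the model list, for example grouping installed models by family. A sketch over the sample response shown above:

```python
from collections import defaultdict

def group_by_family(models):
    """Group model names by their details.family value."""
    groups = defaultdict(list)
    for m in models:
        groups[m["details"]["family"]].append(m["name"])
    return dict(groups)

# Trimmed-down entries from the sample response above
models = [
    {"name": "llama3.2:latest", "details": {"family": "llama"}},
    {"name": "mistral:latest", "details": {"family": "mistral"}},
]
print(group_by_family(models))
# {'llama': ['llama3.2:latest'], 'mistral': ['mistral:latest']}
```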

Code Examples

Python

import requests

def list_models():
    response = requests.get("http://localhost:11434/api/tags")
    return response.json()["models"]

models = list_models()
for model in models:
    size_gb = model["size"] / (1024 ** 3)
    print(f"{model['name']}: {size_gb:.2f} GB")

Output:

llama3.2:latest: 4.34 GB
mistral:latest: 3.83 GB

Checking for a Specific Model

def model_exists(model_name):
    models = list_models()
    return any(m["name"].startswith(model_name) for m in models)

if model_exists("llama3.2"):
    print("Model is installed")
else:
    print("Model is not installed")
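Note that startswith can over-match: a query for llama3.2 would also match a hypothetical llama3.2-vision:latest. A stricter comparison splits off the tag first (this helper is an illustration, not part of the API):

```python
def model_matches(query, installed_name):
    """Match a user-supplied model name against an installed model name.

    A query with an explicit tag must match exactly; a query without a
    tag matches any tag of the same base name.
    """
    if ":" in query:
        return query == installed_name
    return query == installed_name.split(":", 1)[0]

print(model_matches("llama3.2", "llama3.2:latest"))         # True
print(model_matches("llama3.2", "llama3.2-vision:latest"))  # False
print(model_matches("mistral:7b", "mistral:latest"))        # False
```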

Getting Model Details

def get_model_info(model_name):
    models = list_models()
    for model in models:
        if model["name"].startswith(model_name):
            return model
    return None

info = get_model_info("llama3.2")
if info:
    print(f"Name: {info['name']}")
    print(f"Size: {info['size'] / (1024**3):.2f} GB")
    print(f"Parameters: {info['details']['parameter_size']}")
    print(f"Quantization: {info['details']['quantization_level']}")

JavaScript

async function listModels() {
    const response = await fetch('http://localhost:11434/api/tags');
    const data = await response.json();
    return data.models;
}

const models = await listModels();
models.forEach(model => {
    const sizeGB = (model.size / (1024 ** 3)).toFixed(2);
    console.log(`${model.name}: ${sizeGB} GB`);
});

Go

package main

import (
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

type ModelDetails struct {
    Format            string `json:"format"`
    Family            string `json:"family"`
    ParameterSize     string `json:"parameter_size"`
    QuantizationLevel string `json:"quantization_level"`
}

type Model struct {
    Name       string       `json:"name"`
    ModifiedAt string       `json:"modified_at"`
    Size       int64        `json:"size"`
    Digest     string       `json:"digest"`
    Details    ModelDetails `json:"details"`
}

type ModelsResponse struct {
    Models []Model `json:"models"`
}

func listModels() ([]Model, error) {
    resp, err := http.Get("http://localhost:11434/api/tags")
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    data, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, err
    }
    var result ModelsResponse
    if err := json.Unmarshal(data, &result); err != nil {
        return nil, err
    }

    return result.Models, nil
}

func main() {
    models, err := listModels()
    if err != nil {
        fmt.Println("failed to list models:", err)
        return
    }
    for _, m := range models {
        sizeGB := float64(m.Size) / (1024 * 1024 * 1024)
        fmt.Printf("%s: %.2f GB\n", m.Name, sizeGB)
    }
}

Practical Applications

Check and Pull a Model

import json
import requests

def ensure_model(model_name):
    models = requests.get("http://localhost:11434/api/tags").json()["models"]

    for model in models:
        if model["name"].startswith(model_name):
            print(f"Model {model_name} is already installed")
            return True

    print(f"Model {model_name} not found, pulling...")
    response = requests.post(
        "http://localhost:11434/api/pull",
        json={"name": model_name},
        stream=True
    )

    for line in response.iter_lines():
        if line:
            data = json.loads(line)
            if data.get("status"):
                print(data["status"])

    return True

ensure_model("llama3.2")
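During a download, the streamed pull status lines also carry completed and total byte counts for the layer being fetched. A small formatter (a sketch; not every status line carries both fields) can turn them into a percentage:

```python
def pull_progress(data):
    """Render one streamed pull status line as text.

    Appends a percentage when both 'completed' and 'total' are present,
    which is the case while a layer is downloading.
    """
    status = data.get("status", "")
    total = data.get("total")
    completed = data.get("completed")
    if total and completed is not None:
        return f"{status}: {100 * completed // total}%"
    return status

print(pull_progress({"status": "pulling abc123", "completed": 50, "total": 200}))
# pulling abc123: 25%
print(pull_progress({"status": "success"}))
# success
```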

A Model Management Tool

def model_manager():
    while True:
        print("\nModel Management")
        print("1. List all models")
        print("2. Show model details")
        print("3. Delete a model")
        print("4. Exit")

        choice = input("Choose an option: ")

        if choice == "1":
            models = list_models()
            for i, m in enumerate(models, 1):
                size_gb = m["size"] / (1024 ** 3)
                print(f"{i}. {m['name']} ({size_gb:.2f} GB)")

        elif choice == "2":
            name = input("Enter model name: ")
            info = get_model_info(name)
            if info:
                print(f"Name: {info['name']}")
                print(f"Size: {info['size'] / (1024**3):.2f} GB")
                print(f"Parameters: {info['details']['parameter_size']}")
                print(f"Quantization: {info['details']['quantization_level']}")
            else:
                print("Model not found")

        elif choice == "3":
            name = input("Enter the model name to delete: ")
            response = requests.delete(
                "http://localhost:11434/api/delete",
                json={"name": name}
            )
            if response.status_code == 200:
                print("Deleted successfully")
            else:
                print(f"Delete failed: {response.json()}")

        elif choice == "4":
            break
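The same listing data is enough for a quick disk-usage report. Summing size over the sample response shown earlier gives the total on-disk footprint:

```python
def total_size_gb(models):
    """Total on-disk size of the listed models, in GB."""
    return sum(m["size"] for m in models) / (1024 ** 3)

# Sizes taken from the sample response above
models = [
    {"name": "llama3.2:latest", "size": 4661224676},
    {"name": "mistral:latest", "size": 4108928000},
]
print(f"Total: {total_size_gb(models):.2f} GB")  # Total: 8.17 GB
```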