Module: OllamaChat::ModelHandling
- Included in:
- Chat
- Defined in:
- lib/ollama_chat/model_handling.rb
Overview
A module that provides functionality for managing Ollama models: checking model availability, pulling models from remote servers, and verifying that required models are present.
This module encapsulates the logic for interacting with Ollama models, ensuring that required models are available locally before attempting to use them in chat sessions. It handles both local model verification and remote model retrieval when necessary.
Defined Under Namespace
Classes: ModelMetadata
Instance Method Summary collapse
-
#choose_model(cli_model, current_model) ⇒ Object
private
The choose_model method selects a model from the available list based on CLI input or user interaction.
-
#model_present?(model) ⇒ ModelMetadata, FalseClass
private
The model_present? method checks if the specified Ollama model is available.
-
#model_with_size(model) ⇒ Object
private
The model_with_size method builds a display string for a model, combining the model name with its size rendered in human-readable units.
-
#pull_model_from_remote(model) ⇒ Object
private
The pull_model_from_remote method attempts to retrieve a model from the remote server if it is not found locally.
-
#pull_model_unless_present(model) ⇒ ModelMetadata
private
The pull_model_unless_present method ensures that a specified model is available on the Ollama server.
-
#use_model(model = nil) ⇒ ModelMetadata
private
The use_model method selects and sets the model to be used for the chat session.
Instance Method Details
#choose_model(cli_model, current_model) ⇒ Object (private)
The choose_model method selects a model from the available list based on CLI input or user interaction. It processes the provided CLI model parameter to determine if a regex selector is used, filters the models accordingly, and prompts the user to choose from the filtered list if needed. The method ensures that a model is selected and displays a connection message with the chosen model and base URL.
# File 'lib/ollama_chat/model_handling.rb', line 144

def choose_model(cli_model, current_model)
  selector = if cli_model =~ /\A\?+(.*)\z/
               cli_model = ''
               Regexp.new($1)
             end
  models = ollama.tags.models.sort_by(&:name).map { |m| model_with_size(m) }
  selector and models = models.select { _1.value =~ selector }
  model = if models.size == 1
            models.first
          elsif cli_model == ''
            OllamaChat::Utils::Chooser.choose(models)&.value || current_model
          else
            cli_model || current_model
          end
ensure
  connect_message(model, ollama.base_url)
end
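The `?`-prefix selector convention used above can be isolated in a small sketch. The helper name and the sample model names here are illustrative, not part of the gem; the real method filters `SearchUI::Wrapper` objects rather than plain strings.

```ruby
# Sketch of choose_model's selector convention: a CLI model argument
# starting with one or more '?' characters is treated as a regex filter
# over the available model names instead of a literal model name.
def parse_model_selector(cli_model)
  if cli_model =~ /\A\?+(.*)\z/
    [Regexp.new($1), '']   # '?llama' becomes the filter /llama/; literal name dropped
  else
    [nil, cli_model]       # plain name, no filtering
  end
end

selector, = parse_model_selector('?llama')
names     = %w[llama3:8b mistral:7b codellama:13b]
filtered  = selector ? names.grep(selector) : names
# filtered keeps only the names matching /llama/
```

With this convention a user can type `?llama` to be offered only the matching models in the chooser, while a plain name like `mistral:7b` selects that model directly.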
#model_present?(model) ⇒ ModelMetadata, FalseClass (private)
The model_present? method checks if the specified Ollama model is available.
# File 'lib/ollama_chat/model_handling.rb', line 44

def model_present?(model)
  ollama.show(model:) do |md|
    return ModelMetadata.new(
      model,
      md.system,
      md.capabilities,
    )
  end
rescue Ollama::Errors::NotFoundError
  false
end
#model_with_size(model) ⇒ Object (private)
The model_with_size method builds a display string for a model, combining the model name with its size rendered in human-readable units, and wraps the result for use in the model chooser.
# File 'lib/ollama_chat/model_handling.rb', line 96

private def model_with_size(model)
  formatted_size = Term::ANSIColor.bold {
    Tins::Unit.format(model.size, unit: ?B, prefix: 1024, format: '%.1f %U')
  }
  SearchUI::Wrapper.new(model.name, display: "#{model.name} #{formatted_size}")
end
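The size formatting delegated to `Tins::Unit.format` can be sketched without the dependency. This is an illustrative equivalent of the 1024-based scaling, not the gem's code; the exact prefix strings Tins emits may differ from the plain `KB`/`MB`/`GB` labels used here.

```ruby
# Dependency-free sketch of 1024-based human-readable size formatting,
# approximating Tins::Unit.format(size, unit: ?B, prefix: 1024, format: '%.1f %U'):
# scale the byte count down by powers of 1024 and attach the matching unit.
def human_size(bytes)
  units = %w[B KB MB GB TB PB]
  exponent = bytes.positive? ? Math.log(bytes, 1024).floor : 0
  exponent = units.size - 1 if exponent >= units.size
  format('%.1f %s', bytes.to_f / 1024**exponent, units[exponent])
end

human_size(4_900_000_000)  # a ~4.9 GB model renders as "4.6 GB"
```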
#pull_model_from_remote(model) ⇒ Object (private)
The pull_model_from_remote method attempts to retrieve a model from the remote server if it is not found locally.
# File 'lib/ollama_chat/model_handling.rb', line 60

def pull_model_from_remote(model)
  STDOUT.puts "Model #{bold{model}} not found locally, attempting to pull it from remote now…"
  ollama.pull(model:)
end
#pull_model_unless_present(model) ⇒ ModelMetadata (private)
The pull_model_unless_present method ensures that a specified model is available on the Ollama server. It first checks if the model metadata exists locally; if not, it pulls the model from a remote source and verifies its presence again. If the model still cannot be found, it raises an UnknownModelError indicating the missing model name.
# File 'lib/ollama_chat/model_handling.rb', line 76

def pull_model_unless_present(model)
  if model_metadata = model_present?(model)
    return model_metadata
  else
    pull_model_from_remote(model)
    if model_metadata = model_present?(model)
      return model_metadata
    end
    raise OllamaChat::UnknownModelError,
      "unknown model named #{@model.inspect}"
  end
end
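The check → pull → re-check flow above can be exercised against a stand-in client. `FakeClient` and `ensure_model` are hypothetical names for this sketch; the real method talks to an Ollama server and returns a `ModelMetadata` instance.

```ruby
# Stand-in client illustrating pull_model_unless_present's pattern:
# check locally, pull from remote on a miss, then verify again.
class FakeClient
  def initialize
    @local = {}            # model name => metadata hash
  end

  def present?(model)
    @local[model]          # nil (falsy) when the model is absent
  end

  def pull(model)
    @local[model] = { name: model, capabilities: [] }
  end
end

def ensure_model(client, model)
  if md = client.present?(model)
    return md
  else
    client.pull(model)
    if md = client.present?(model)
      return md
    end
    raise "unknown model named #{model.inspect}"
  end
end
```

The double check matters: a pull can fail silently (network error, bad name), so presence is verified again before the metadata is returned, and an error is raised only when the model is still missing.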
#use_model(model = nil) ⇒ ModelMetadata (private)
The use_model method selects and sets the model to be used for the chat session.
It allows specifying a particular model or defaults to the current model. After selecting, it pulls the model metadata if necessary. If think? is true and the chosen model does not support thinking, the think mode selector is set to ‘disabled’. If tools_support.on? is true and the chosen model does not support tools, tool support is disabled. Returns the metadata for the selected model.
# File 'lib/ollama_chat/model_handling.rb', line 117

def use_model(model = nil)
  if model.nil?
    @model = choose_model('', @model)
  else
    @model = choose_model(model, config.model.name)
  end
  @model_metadata = pull_model_unless_present(@model)
  if think? && !@model_metadata.can?('thinking')
    think_mode.selected = 'disabled'
  end
  if tools_support.on? && !@model_metadata.can?('tools')
    tools_support.set false
  end
  @model_metadata
end
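The capability gating at the end of use_model can be sketched independently: a requested feature stays active only if the model's metadata advertises the matching capability. `gate_features` and the capability strings below are illustrative, not the gem's API.

```ruby
# Sketch of use_model's feature downgrading: session features (symbols)
# survive only when the model's capability list (strings, as reported by
# the server) contains the matching entry.
def gate_features(capabilities, requested_features)
  requested_features.select { |feature| capabilities.include?(feature.to_s) }
end

# A model advertising only 'completion' and 'tools' keeps :tools active,
# while :thinking would be switched off for the session.
gate_features(%w[completion tools], %i[thinking tools])
```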