All notable changes to this project will be documented in this file.
### ai_core

- `Smooth_stream` — stream transformer that buffers `Text_delta` and `Reasoning_delta` chunks and re-emits them in controlled pieces with configurable inter-chunk delays. Five chunking modes: `Word` (default), `Line`, `Regex` (custom Re2 pattern), `Segmenter` (Unicode UAX#29 word boundaries via uuseg, recommended for CJK), and `Custom` (user-supplied function). Matches the upstream AI SDK's `smoothStream` transform.
- `?transform` parameter on `stream_text` and `server_handler.handle_chat` — a generic stream transformer (`Text_stream_part.t Lwt_stream.t -> Text_stream_part.t Lwt_stream.t`) applied between the raw event stream and the consumer-facing streams. Both `full_stream` and `text_stream` reflect the transformed output.
- `Retry` module with jitter, configurable initial delay and backoff factor, and parameter validation. `?max_retries` is threaded through `generate_text`, `stream_text`, and `server_handler.handle_chat`. Retries happen only on errors marked retryable.
- `Telemetry` module with OpenTelemetry-compatible span instrumentation via the trace library (ocaml-trace). Configurable `Telemetry.t` settings control enable/disable, input/output recording privacy, function ID, custom metadata, and lifecycle integration callbacks (`on_start`, `on_step_finish`, `on_tool_call_start`, `on_tool_call_finish`, `on_finish`). The span hierarchy matches the upstream AI SDK: `ai.generateText` / `ai.streamText` root spans, `*.doGenerate` / `*.doStream` step spans, and `ai.toolCall` tool-execution spans. A `?telemetry` parameter is threaded through `generate_text`, `stream_text`, and `server_handler.handle_chat`.

### ai_provider

- `is_retryable` field on `Provider_error.t` — defaults from the HTTP status code (429 and 5xx are retryable). The Anthropic and OpenAI providers set it explicitly based on error classification.

### Examples

- `smooth_streaming` — demonstrates all five chunking modes
- `telemetry_logging` — demonstrates integration callbacks for lifecycle logging

### Dependencies

- re2 (>= 0.16) and uuseg (>= 17.0) added to `ai_core`
- trace (>= 0.12) added to `ai_core`

## Initial release

The initial release of the OCaml AI SDK — a type-safe, provider-agnostic AI model abstraction inspired by the Vercel AI SDK, targeting AI SDK v6 wire compatibility.
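The "provider-agnostic model abstraction" rests on OCaml module types with a first-class module wrapper (the entries below list `Language_model.S` and `Provider.S`). A minimal generic sketch of that pattern — the names `MODEL`, `Echo`, and `run` are illustrative assumptions, not the SDK's real signatures:

```ocaml
(* A generic sketch of a provider-agnostic model abstraction using an
   OCaml module type and first-class modules. Toy names throughout; the
   SDK's actual interfaces differ. *)

module type MODEL = sig
  val provider : string
  val generate : prompt:string -> string
end

(* A toy "provider" that just echoes the prompt. *)
module Echo : MODEL = struct
  let provider = "echo"
  let generate ~prompt = "echo: " ^ prompt
end

(* Code written against MODEL works with any provider passed as a
   first-class module value. *)
let run (module M : MODEL) prompt =
  Printf.sprintf "[%s] %s" M.provider (M.generate ~prompt)

let () = print_endline (run (module Echo) "hello")
(* prints: [echo] echo: hello *)
```

Because the provider is an ordinary value, swapping backends is a call-site change rather than a compile-time wiring change, which is what makes middleware-style composition possible.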
### ai_provider

- `Provider_options` for compile-time, type-safe provider-specific settings
- Prompt types (`System` = string only, `User` = text + files, etc.)
- `Language_model.S` module type with a first-class module wrapper
- `Tool`, `Tool_choice`, `Mode`, `Content` foundation types
- `Finish_reason`, `Usage`, `Warning`, `Provider_error` types
- `Provider.S` and `Middleware.S` module type signatures
- `Call_options`, `Generate_result`, `Stream_part`, `Stream_result` types

### ai_provider_anthropic

- Thinking support with a `budget_tokens` smart constructor (>= 1024)
- `Cache_control` for prompt caching
- `Anthropic_options` via the extensible GADT system
- `max_tokens`

### ai_provider_openai

### ai_core

- `generate_text` — synchronous text generation with a multi-step tool loop
- `stream_text` — streaming text generation with a multi-step tool loop; returns synchronously, with streams filled by a background Lwt task
- `Output.text`, `Output.object_`, `Output.enum`, `Output.array`, `Output.choice` with JSON Schema validation
- `data: {json}\n\n` encoding with the `x-vercel-ai-ui-message-stream: v1` header; all v6 chunk types
- `Ui_message_stream_writer` — composable stream builder with `write` (synchronous) and `merge` (non-blocking via `Lwt.async`), lifecycle management, ref-counted in-flight merge tracking, and an `on_finish` callback
- `needs_approval` predicate on `Core_tool.t`, step-loop partitioning, `Tool_approval_request` chunk type, stateless re-submission with `approved_tool_call_ids`
- `Stop_condition` — step-loop termination predicates matching upstream `stopWhen`: `step_count_is`, `has_tool_call`, `is_met` (OR semantics with short-circuit); wired through `generate_text`, `stream_text`, and `server_handler`; `max_steps` remains an independent hard safety cap

### ai-sdk-react

- `useChat` and `useCompletion` hook bindings for `@ai-sdk/react`
- `data_ui_part` `classify` function for part type dispatch

### Examples

- `one_shot`, `streaming`, `tool_use`, `thinking`, `generate`, `stream_chat`, `agent_loop` — standalone CLI examples
- `chat_server` — cohttp chat server with React frontend, tool approval, and structured output
- `custom_stream` — custom data streaming with a Melange frontend
- `ai-e2e` — end-to-end Melange app with 11 demos (basic chat, reasoning, tool use, tool approval, client tools, file attachments, structured output, completion, web search, retry/regenerate)

### Build

- `generate_opam_files` for automated opam file generation

### mlx-pp / ocamlformat-mlx
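The `Stop_condition` semantics listed under `ai_core` above (`step_count_is`, `has_tool_call`, and `is_met` with OR semantics and short-circuit) can be sketched generically; the record type and field names here are illustrative assumptions, not the SDK's actual interface:

```ocaml
(* A generic sketch of Stop_condition-style predicates with OR semantics
   and short-circuit evaluation. The [step] record is a stand-in for
   whatever per-step state the SDK actually exposes. *)

type step = { count : int; tool_calls : string list }

type condition = step -> bool

(* Stop once the loop has run at least [n] steps. *)
let step_count_is n : condition = fun s -> s.count >= n

(* Stop once a given tool has been called in this step. *)
let has_tool_call name : condition = fun s -> List.mem name s.tool_calls

(* OR semantics: stop as soon as any condition holds. List.exists
   short-circuits on the first condition that returns [true]. *)
let is_met (conds : condition list) (s : step) : bool =
  List.exists (fun c -> c s) conds

let () =
  let s = { count = 2; tool_calls = [ "web_search" ] } in
  assert (is_met [ step_count_is 5; has_tool_call "web_search" ] s);
  assert (not (is_met [ step_count_is 5; has_tool_call "finish" ] s));
  print_endline "ok"
```

Keeping `max_steps` as a separate hard cap, as the entry above notes, means a buggy or never-satisfied condition list still cannot loop forever.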