MonceApp v0.1.0

Try it — runs on EC2 right now

Install

pip install git+https://github.com/Monce-AI/monceai-sdk.git

3 Constructors — str in, str or dict out

| Constructor | Returns | What |
|---|---|---|
| Charles("6x7") | str | Math, science, general — routes to best engine |
| Moncey("44.2 feuillete") | str | Glass sales agent — snake + classifiers + Haiku |
| Json("5 primes") | dict | Structured JSON — dict subclass, print = json.dumps(indent=2) |

1. Charles — static (blocks) vs client (parallel)

from monceai import Charles

# Static — blocks, returns str
Charles("6x7")                     # → "42"
Charles("pi+e")                    # → "5.859874482"
Charles("factor 10403")             # → "101 × 103"

# Client — fires parallel futures
c = Charles()
a = c("6x7")                       # fires in 0ms
b = c("8x9")                       # fires in 0ms
d = c("pi+e")                      # fires in 0ms
print(a, b, d)                     # blocks on first read, all done
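The client's fire-now, block-on-read behavior can be approximated with the standard library. This is an illustrative sketch of the pattern, not SDK internals — LazyStr, Client, and the local multiplication stand-in are all hypothetical:

```python
import time
from concurrent.futures import ThreadPoolExecutor

class LazyStr:
    """Future that resolves to a string only when it is read."""
    def __init__(self, future):
        self._future = future

    def __str__(self):
        return self._future.result()   # blocks here, not at call time

class Client:
    def __init__(self):
        self._pool = ThreadPoolExecutor()

    def __call__(self, prompt):
        # submit() fires immediately and returns without waiting
        return LazyStr(self._pool.submit(self._solve, prompt))

    @staticmethod
    def _solve(prompt):
        time.sleep(0.1)                # stand-in for the network round-trip
        a, b = prompt.split("x")
        return str(int(a) * int(b))

c = Client()
x, y = c("6x7"), c("8x9")              # both requests in flight at once
print(x, y)                            # → 42 72
```

Because both futures start at call time, total wall-clock is roughly one round-trip, not the sum of all of them.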

2. Moncey — glass industry sales agent

from monceai import Moncey

# Static — blocks, returns str
Moncey("44.2 feuillete LowE 16mm")  # → "Bonjour, Feuilleté 44.2..."

# Client — parallel futures
m = Moncey()
a = m("devis 44.2")                 # fires in 0ms
b = m("relance commande 4523")      # fires in 0ms
print(a, b)                        # blocks on read

3. Json — structured output, chains with Charles + Moncey

from monceai import Json, Moncey

# Standalone
Json("list 5 primes")              # → {"primes": [2, 3, 5, 7, 11]}

# Chain — Moncey resolves, Json structures
Json("Extract order: " + Moncey("44.2 Silence/16 alu/4 JPP"))
# → {"articles": [{"name": "Feuilleté 44.2", "ref": "#60442"}, ...]}
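The constructor table describes Json as a dict subclass whose printed form is json.dumps(indent=2). That contract can be sketched as follows — this toy class is illustrative only (the real Json also resolves the prompt through a model):

```python
import json

class Json(dict):
    """Sketch of the documented contract: a dict that pretty-prints as JSON."""
    def __str__(self):
        return json.dumps(self, indent=2)

j = Json({"primes": [2, 3, 5, 7, 11]})
j["primes"][0]         # plain dict access → 2
print(j)               # json.dumps(..., indent=2) form
```

Being a real dict means results chain into any code expecting mappings — indexing, .keys(), ** unpacking — while print stays human-readable.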

4. VLM — image + text

from monceai import VLM

# use a context manager so the file handle is closed promptly
with open("order.png", "rb") as f:
    r = VLM("extract fields", image=f.read())
print(r.json)

5. LLM — direct model access

from monceai import LLM

LLM("hello", model="haiku")       # fast, cheap
LLM("bonjour", model="sonnet")    # premium
LLM("hello", model="nova-micro")  # cheapest

6. curl — no SDK needed

# Chat
curl -sX POST https://monceapp.aws.monce.ai/v1/chat \
  -F "message=6x7" -F "model_id=charles-auma" | jq .reply

# Calc (pure compute, no model)
curl -sX POST https://monceapp.aws.monce.ai/v1/calc \
  -H "Content-Type: application/json" -d '{"expression":"pi+e"}' | jq .result
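The /v1/calc call above can also be built with nothing but the Python standard library. A sketch that constructs the same request (uncomment the send lines to actually hit the endpoint):

```python
import json
import urllib.request

# Same request as the curl /v1/calc example, built with stdlib only.
payload = json.dumps({"expression": "pi+e"}).encode()
req = urllib.request.Request(
    "https://monceapp.aws.monce.ai/v1/calc",
    data=payload,
    headers={"Content-Type": "application/json"},
)
print(req.get_method(), req.full_url)
# resp = urllib.request.urlopen(req)       # uncomment to actually send
# print(json.load(resp)["result"])
```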

13 Models

| Shorthand | Engine | Speed | Cost/msg | Best for |
|---|---|---|---|---|
| charles-auma | Haiku → AUMA {0,1}^n → Haiku | 3-8s | ~$0.003 | Math, roots, factoring |
| charles-science | Snake router → 7 services → Sonnet | 15-60s | ~$0.01 | Science, SAT, chess, sudoku |
| charles | 4x parallel → Sonnet | 8-15s | ~$0.01 | Deep analysis |
| charles-json | Memory → Sonnet JSON (VLM) | 5-15s | ~$0.01 | Structured output, images |
| charles-architect | Memory → Sonnet ASCII | 5-15s | ~$0.01 | Diagrams, charts |
| concise | charles → Haiku TL;DR | 10-20s | ~$0.01 | Short answers with depth |
| cc | charles ∥ concise → synthesis | 12-25s | ~$0.02 | Best of both |
| moncey | snake/comprendre + classifiers → Haiku | 3-8s | ~$0.002 | Glass sales agent |
| sonnet | Sonnet 4.6 + tools | 1-3s | ~$0.03 | Premium quality |
| haiku | Haiku 4.5 + tools | 1-2s | ~$0.003 | Fast, cheap |
| nova-pro | Nova Pro | 0.8s | ~$0.008 | Amazon, fast |
| nova-lite | Nova Lite | 0.7s | ~$0.001 | Bulk queries |
| nova-micro | Nova Micro | 0.6s | ~$0.0005 | Cheapest possible |
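The cost column suggests a simple client-side router. A hypothetical helper — not part of the SDK, with prices hard-coded from the table above rather than fetched from /v1/models:

```python
# Approximate per-message costs (USD), copied from the table above.
COST = {
    "sonnet": 0.03, "cc": 0.02, "charles": 0.01,
    "nova-pro": 0.008, "haiku": 0.003, "charles-auma": 0.003,
    "moncey": 0.002, "nova-lite": 0.001, "nova-micro": 0.0005,
}

def pick_model(budget_usd):
    """Return the priciest model that fits the per-message budget."""
    affordable = [m for m, c in COST.items() if c <= budget_usd]
    if not affordable:
        raise ValueError("budget below the cheapest model")
    return max(affordable, key=COST.get)

print(pick_model(0.005))   # → haiku
print(pick_model(1.0))     # → sonnet
```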

LLMResult — what you get back

from monceai import LLM

r = LLM("factor 10403", model="charles-auma")

r.text         # "10403 = 101 × 103"
r.json         # parsed dict (charles-json only) or None
r.ok           # True
r.model        # "monceai-charles-auma"
r.elapsed_ms   # 4200
r.input_tokens # 314
r.output_tokens # 144
r.sat_memory   # {"formula": "-(10403-x*y)**2", "auma_x": [101, 103], ...}
r.raw          # full API response dict
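A defensive way to consume these fields: guard on r.ok, and remember r.json may be None outside charles-json. SimpleNamespace stands in for the SDK result object here — this is a usage sketch, not the real class:

```python
from types import SimpleNamespace

# Stand-in for an LLMResult with the fields listed above (not the real class).
r = SimpleNamespace(ok=True, text="10403 = 101 × 103", json=None)

if r.ok:
    answer = r.json if r.json is not None else r.text
else:
    answer = None          # in the real SDK, inspect r.raw for the error payload
print(answer)              # → 10403 = 101 × 103
```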

REST API endpoints

| Method | Endpoint | What |
|---|---|---|
| POST | /v1/chat | Chat with any model (text + optional image) |
| POST | /v1/calc | Exact arithmetic — any formula, no model |
| POST | /v1/enhance | Prompt enhancement (str → str + context) |
| POST | /v1/diff | Raw vs enhanced model comparison |
| GET | /v1/models | List all available models |
| GET | /v1/factories | Factory registry (9 glass factories) |
| GET | /available | Ping all models live |

Live — try it now

No API key. No signup. No billing. pip install the SDK and go.
github.com/Monce-AI/monceai-sdk · Built by Charles Dana at Monce SAS