yelnatz
Can you do a column and normalize them?

Too many zeroes for my blind ass, making it hard to compare.

simonw
I don't understand how the Claude functionality works.

As far as I know Anthropic haven't released the tokenizer for Claude - unlike OpenAI's tiktoken - but your tool lists the Claude 3 models as supported. How are you counting tokens for those?

J_Shelby_J
Would anybody be interested in this for Rust? I already do everything this library does, with the exception of returning the price, in my LLM utils crate [1]. I do this to count tokens and ensure prompts stay within limits, and I also support non-OpenAI tokenizers, so adding a price calculator function would be trivial.

[1] https://github.com/ShelbyJenkins/llm_utils

Lerc
With all the options, there seems to be an opportunity for a single API endpoint that takes a series of prompts, a budget, and a quality hint, and distributes the batches for the most bang for the buck.

Maybe a small triage AI could decide how effectively each model handles certain prompts, to reserve spending for the difficult tasks.

Does anything like this exist yet?
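The triage idea can be sketched in a few lines. Everything below (model names, prices per 1M tokens, quality scores) is made up for illustration; a real router would learn the quality scores per task rather than hard-code them:

```python
# Route each prompt to the cheapest model whose quality score meets the
# hint, skipping prompts once the budget is exhausted. All numbers are
# hypothetical placeholders.

MODELS = [
    # (name, USD per 1M input tokens, rough quality score 0-1)
    ("small-model", 0.25, 0.60),
    ("mid-model", 3.00, 0.80),
    ("big-model", 15.00, 0.95),
]

def route(prompts, budget_usd, quality_hint):
    """Assign each (prompt, estimated_tokens) pair to a model, cheapest first."""
    plan, spent = [], 0.0
    for prompt, est_tokens in prompts:
        # Models that satisfy the quality hint, cheapest first;
        # fall back to the cheapest model overall if none qualify.
        candidates = sorted(
            (m for m in MODELS if m[2] >= quality_hint),
            key=lambda m: m[1],
        ) or [min(MODELS, key=lambda m: m[1])]
        for name, price, _quality in candidates:
            cost = est_tokens / 1_000_000 * price
            if spent + cost <= budget_usd:
                plan.append((prompt, name, cost))
                spent += cost
                break
        else:
            plan.append((prompt, None, 0.0))  # budget exhausted: skip
    return plan, spent

plan, spent = route(
    [("easy task", 500), ("hard task", 2000)],
    budget_usd=0.10,
    quality_hint=0.7,
)
```

With a 0.7 quality hint, both prompts land on the cheapest qualifying model; a greedier variant could also sort prompts by estimated difficulty first.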

Ilasky
I dig it! Kind of related, but I made a comparison of LLM API costs vs. their leaderboard performance to gauge which models offer the most bang for the buck [0]

[0] https://llmcompare.net

sakex
An interesting parameter that I don't see discussed much is vocab size. A larger vocab means the model needs fewer tokens for the same word on average, and the effective context window is larger too. So a model with a large vocab might be more expensive on a per-token basis but generate fewer tokens for the same sentence, making it cheaper overall. This should be taken into account when comparing API prices.

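To make this concrete, here is a toy calculation; the prices and tokens-per-word ratios are invented for illustration:

```python
# A pricier-per-token model can still be cheaper per word if its tokenizer
# needs fewer tokens to encode the same text. All numbers are hypothetical.

def cost_per_1k_words(price_per_1m_tokens, tokens_per_word):
    """Effective cost of 1,000 words given a tokenizer's tokens/word ratio."""
    return price_per_1m_tokens * tokens_per_word * 1_000 / 1_000_000

# Model A: small vocab, 1.5 tokens/word, lower sticker price per token.
a = cost_per_1k_words(price_per_1m_tokens=10.0, tokens_per_word=1.5)
# Model B: large vocab, 1.1 tokens/word, higher sticker price per token.
b = cost_per_1k_words(price_per_1m_tokens=12.0, tokens_per_word=1.1)
# Despite the higher per-token price, B works out cheaper per word.
```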
pamelafox
Are you also accounting for the costs of sending images and function calls? I didn't see that when I looked through the code. I developed this package so that I could count those sorts of calls as well: https://github.com/pamelafox/openai-messages-token-helper

oopsallmagic
Can we get conversions for kg of CO2 emitted, too?

zackfield
Very cool! Is the cost directory you're using the best source for historical cost per 1M tokens? https://github.com/BerriAI/litellm/blob/main/model_prices_an...

Karrot_Kream
A whole bunch of the costs are listed as zeroes with a long run of decimal places. I noticed y'all used the Decimal library and tried to hold onto precision, so I'm not sure what's going on, but certainly some of the cheaper models just show up as "free".

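One guess at the cause, sketched below: if exact Decimal prices are rendered with a fixed number of decimal places, anything under the cutoff rounds to all zeros. The price here is hypothetical:

```python
# Fixed-point formatting silently rounds tiny Decimal prices to zero;
# printing the Decimal directly (or in scientific notation) preserves them.
from decimal import Decimal

price_per_token = Decimal("0.0000001")  # hypothetical $0.10 per 1M tokens

fixed = f"{price_per_token:.5f}"   # rounds to "0.00000" -> looks "free"
exact = str(price_per_token)       # "1E-7", precision preserved
sci = f"{price_per_token:.2E}"     # scientific notation, e.g. "1.00E-7"
```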
ilaksh
Nice. Any plans to add calculations for image input for the models that allow it?

yumaueno
What a nice product! I think the way tokens are counted depends on the language, so does this only support English?

armen99
This is a great project! I would love to see something that calculates training costs as well.

jaredliu233
Wow, this is really useful!! Just the price list alone has given me a lot of inspiration, thank you.

jacobglowbom
Nice. Does it add Vision costs too?