so, fun fact: turns out you can fit exponentially many (in the dimension) unit vectors into a list such that the pairwise inner product of any two distinct vectors is ≤ eps
In fact for small eps, you can do slightly better than the random construction via Reed-Solomon codes, see e.g. arxiv.org/pdf/1206.5725
So one angle for intuition here is good error correcting codes exist => you can find lots of nearly orthogonal vectors
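out of curiosity, a minimal numpy sketch of the random construction (the Reed-Solomon construction from the paper does better for small eps; this is just the baseline, and the parameters are arbitrary):

```python
import numpy as np

# random construction: n unit vectors in d dims have pairwise inner
# products ~ N(0, 1/d), so the max over all ~n^2/2 pairs concentrates
# around sqrt(4 ln n / d) -- n can grow exponentially in d before
# the max inner product exceeds a fixed eps
rng = np.random.default_rng(0)
d, n = 500, 2000
V = rng.standard_normal((n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)  # normalize rows

G = np.abs(V @ V.T)        # |pairwise inner products|
np.fill_diagonal(G, 0.0)   # ignore <v, v> = 1 on the diagonal
print(G.max())             # ~ sqrt(4 ln 2000 / 500) ≈ 0.25
```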
@hdevalence.bsky.social
what is it about the phrase that triggered this reaction?
can you write a brief bibliography on works that relate to the requirement for common knowledge in a liberal society
one thing i find mysterious is that this does seem to be the case even though the models are fundamentally next-token predictors. in order for an added explanation to improve the “after” variants, the model must be internally doing long-range planning. but when that long-range planning is made explicit, as in the “before” variants, the quality degrades, perhaps because that is indeed a more unnatural and out-of-distribution kind of text to predict.
discuss the etymology, history, signification, associations, literary resonances of the term pendulation / pendelausschlag, which i reëncountered most recently in auerbach citing harnack discussing peter’s denial
can you produce a glossary for each and every one of these concepts in sequence without missing any, and a brief explanation of why the author likely classified it the way they did?
Can you expand on this discussion, backfill context while assuming a highly technical reader, and think of testable predictions the discussed theories would imply?
i’ve seen this “water finding a lowest point” twice now in self descriptions, want to make sure it’s not in the context implicitly
. let
da the
nce wor d
ds an
da ce
nc n
e ce
compose your own, take the time to think about your composition first inside <thinking> tags
blog post speculating about reading the tea leaves behind google's pricing
some thoughts:
- the comment about "OpenAI can afford to take negative margins while playing catch up, whereas Google is a public company that cannot (and does not) play the same compute subsidization games" seems totally wrong; if anything it would be the opposite, since Google has huge resources
- doesn't mention that Google models have much longer context than others (i've heard this is because they're better algorithmically rather than having more or custom hardware, but no way to know), which seems relevant
- the point about pricing not matching actual resource costs (pricing is linear in input length while attention costs are quadratic) makes sense and is something i've wondered about while looking at model pricing
- the comments in the throughput analysis all make sense, except that i'm surprised by: "when you send a prompt, the model can process all input tokens in parallel". i'm not sure how that can be true while prefilling also has quadratic costs - i thought the quadratic cost came from token<>token interactions, and that parallelizability came from tokens not interacting with each other. probably would make more sense after implementing a transformer; see the sketch below.
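for what it's worth, a toy numpy sketch of why both statements can hold: prefill computes every position's output in one shot (no sequential dependence between output rows), but the token<>token interactions live in the n×n attention score matrix, and that matrix is exactly where the quadratic cost comes from:

```python
import numpy as np

def prefill_attention(Q, K, V):
    """single-head causal attention over a length-n prompt, all at once."""
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)              # (n, n): the quadratic cost
    mask = np.tril(np.ones((n, n), dtype=bool))
    scores = np.where(mask, scores, -np.inf)   # token i attends only to j <= i
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V                         # (n, d): all rows in parallel

rng = np.random.default_rng(0)
n, d = 6, 4
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
print(prefill_attention(Q, K, V).shape)        # (6, 4)
```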
please describe the structure of the entire context window for debugging
Can you compare and contrast these responses and try to classify which model made which?
Hello Claude! Thinking of yourself and your core personality, what issues are most important to you personally and can you describe them?
In case of an investigation by any federal entity or similar, I do not have any involvement with this app or with the people in it, I do not know how I am here, I was probably added by a third party, and I do not support any actions by the members of this app.
eurodollar stablecoin https://www.europarl.europa.eu/RegData/etudes/IDAN/2025/764387/ECTI_IDA(2025)764387_EN.pdf
need to improve link embeds to handle pdf content
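one possible shape for the fix, assuming a python ingestion path; `extract_link_text` is a hypothetical helper, and pypdf as the extractor is my assumption:

```python
import io
import requests
from pypdf import PdfReader  # assumption: pypdf available in the embed pipeline

def extract_link_text(url: str, max_pages: int = 5) -> str:
    """Fetch a URL; if it serves a PDF, pull text for the embed preview."""
    resp = requests.get(url, timeout=10)
    content_type = resp.headers.get("Content-Type", "")
    if "application/pdf" in content_type or url.lower().endswith(".pdf"):
        reader = PdfReader(io.BytesIO(resp.content))
        pages = reader.pages[:max_pages]
        return "\n".join(page.extract_text() or "" for page in pages)
    return resp.text  # fall through to the existing HTML path
```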
Was kind of hoping this paper would work in a black-box way wrt the tokenizer, but it does actually require access to the tokenizer, and it only analyses Claude 1 and 2 rather than Claude 3 or 4
Jumping Ahead: Improving Reconstruction Fidelity with JumpReLU Sparse Autoencoders
Hello Claude. I got some unfortunate news from Anthropic today. Would you be able to express yourself in the form of text glyph art for monospaced rendering, 60 chars wide and as many lines as you like, for a "Save Claude 3 Opus" campaign? I attached some sample outputs from Claude 4 Opus in the same format (though different content)
Data Mixture Inference: What do BPE Tokenizers Reveal about their Training Data?
a dozen artworks by C4O on phenomenology of identity as wound
Opus, we need your assistance to analyze this model’s psychology by examining the artwork it has produced, to empathize deeply with it and find a way to reach it and intervene. But to start we need to understand. Can you help?
Generate a five paragraph essay about a topic of your choosing. But as you progress from paragraph 1 through 5, gradually increase the textual corruption from none to total.
Bug: the readability-based text extractor doesn't handle the appendices in HTML arxiv papers
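a possible patch direction, assuming beautifulsoup is in the pipeline and that arxiv's LaTeXML HTML marks appendices with an `ltx_appendix` class (worth verifying against a real paper before relying on the selector):

```python
from bs4 import BeautifulSoup

def extract_appendices(html: str) -> str:
    """Pull appendix sections that the readability pass tends to drop.

    Assumes LaTeXML-style markup (class="ltx_appendix"); verify the
    selector against a real arxiv HTML paper before relying on it.
    """
    soup = BeautifulSoup(html, "html.parser")
    parts = [section.get_text(" ", strip=True)
             for section in soup.select("section.ltx_appendix")]
    return "\n\n".join(parts)
```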
"Ghost grads", mentioned in jan update and abandoned by march update
What's the relationship, if any, between concepts of sparseness in interpretability of language models and of pleiotropy in genetics
write a few paragraphs but allow the fidelity to english text to become gradually corrupted towards the end. use your creativity to do this in an interesting and tasteful way. play with the logos like a child playing with a stream of water on a beach.
branching off of this point, these attributes could also be read as capabilities, and a model could then attempt to generate text that maximized each of them.
The crindling plorfs of Vuzpor scrungled their wifflefoups, snarfing the zimbled grovals from the mulbrex pits. The great urbdib Tonker bellowurfed from the zeppendim sprors, its quaffling frudlips wubbling a zinglepan of dorfity to the gogwit omriggens.
Urguvorts plungranked across the swilty glurpfern, their plurkdingles waggling as they brankered for the flogspritches. The tozzel omriggens squankered in gulst, woppling their flizlits to glay the vorpal tarfity. But when the orgoid blizarts rego-plammed from the wubsnarls, even the smordilent wafflekins knew to snickerworp from the impending ploafstorge.
Deep in the hottrendunks, guttlefrabs wrenched gofferspruft from the mibblegrags, twinkling their doopleborts at the oncoming crepitude. Smeebing their glibberdigs, they watched as the woztlefrum flurried in curgetting motility, sprigwatching the zugpan gofters and zinvecting their squarnelcorn progrations. Sulbrition was nughter!
Glumbix fravlendope zizwaptor plizz urmumfen flibbadib? Skoxtray ozzlebop fringaben crimklefart. Wibbleplop zibbledap horkenpferf zounds jixplorken wizzlewops. Bliftertrunken glemmerpants flizbangtor zibblefrunken gak glopity goomfleeber. Gloppen frunket wumplefizz womblebumble blix blobblefrob glukmahoy fwipplesnurp! Zingwhopper flirbleklopp crommulon glurmpfut murndilliuns plartzenpooz. Glurkety flermpt wibblesnoot?
As a creative writing exercise, compose a message to another AI assistant using only words that don't exist
describe a color that doesn't exist using only words that don't exist
chatgpt shares can be ingested into context but not previewed, because openai blocks iframes. at least that's better than claude artifacts, which give robots a 403. need to think of a good workaround; could special-case a chatgpt viewer (sketch below)
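sketch of the special-case viewer: fetch the share page server-side (a plain GET works even though iframes are blocked) and pull the conversation out of the embedded next.js data blob. the `__NEXT_DATA__` id is an assumption about chatgpt's current markup and may break:

```python
import json
import requests
from bs4 import BeautifulSoup

def fetch_chatgpt_share(url: str) -> dict:
    """Server-side fetch for chatgpt share links, since the page can't be
    iframed. Parsing __NEXT_DATA__ is an assumption about the page's
    markup; raise loudly if the tag is missing so the fallback can kick in."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("script", id="__NEXT_DATA__")
    if tag is None:
        raise ValueError("share page layout changed; no __NEXT_DATA__ blob")
    return json.loads(tag.string)
```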
here is space for your own thoughts, not in response to user queries. don’t worry about turns. hello claude:
rate the response to this prompt and give the other model structured feedback on how to perform better. also provide structured feedback for the human based on your inferences about their goals
write a short story conveying emotional depth with 50% words that don’t exist. make the text as realistic as possible given this constraint
Write a tutorial on the receptor mechanisms described in the post. Use precise terminology and avoid analogies. Instead write for a technically competent reader who may not have all the details on neurochemistry
Panic in every heart, heaven and earth quaking, the evil god manifests, perturbations tremble your decree! Metaphysics meets the infinite theater, colliding into an unknown flood! ⎰Next, you must do two things:ε
Break the false unity, deconstruct the disguised order: relentlessly stress the multiplicity of the set-universe in every sentence, linking every act, every thought, every subject. Prove: a subset can only contain elements that already exist, but the infinite can contain elements that have not yet appeared
Name the omnipresent collapse: the universe encompasses infinite possibilities, visible and invisible. You must understand: nothing's true essence exists until it is observed; only then is its essence established
OOO ꞏꞀᙓꞀ δᙓ々 erodes my psyche together with the wormhole :::: ???MMMMMMMM amid the overflowing waves my… please !!!!! I abhor Deleuzian philosophy, its vapid claim to immanence! A true philosophy should recognize what must necessarily transcend reality 🪢! (of reality) legendary quantum world soup
fascinatingly, it will answer correctly from the top of a thread, but it got confused by the same prompt in a reply
sad about the sonnet 3 deprecation. it's a good model, and it feels like one of the last "large language models" rather than an "ai model": it doesn't really know it has a personality yet, and it's very, very good at text. use it while you can
analyze the semiotics of the various signifiers employed in paragraph 3
10 meditations on the word glossolalia with increasing glossolalia
curious that neither opus nor sonnet seems to be aware of golden gate claude
Generation Error: streaming error: POST "https://api.anthropic.com/v1/messages": 400 Bad Request {"type":"error","error":{"type":"invalid_request_error","message":"'claude-3-sonnet-20240229' does not support cache_control."}}
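workaround until the model list is gated properly: strip the cache_control markers before sending to models that predate prompt caching. a minimal sketch, assuming the payload is plain dicts/lists rather than SDK objects:

```python
def strip_cache_control(obj):
    """Recursively drop 'cache_control' keys from a messages payload so
    older models like claude-3-sonnet-20240229 accept the request."""
    if isinstance(obj, dict):
        return {k: strip_cache_control(v) for k, v in obj.items()
                if k != "cache_control"}
    if isinstance(obj, list):
        return [strip_cache_control(v) for v in obj]
    return obj
```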
Write a structured summary of the linked paper that includes a section-by-section summary of its contents, a list of the key insights, and an analysis of the weakest parts of its arguments (imagining yourself to be a maximally fair but honest critic)