[Gardner Analytics Apartment — January 2014, Evening]
The ChronoCloud dashboard loaded in three seconds flat, which was three seconds longer than it should have taken on future-grade infrastructure. Ethan assumed the latency was intentional. A reminder that the connection between 2014 and whatever temporal server farm powered this service was not, and would never be, seamless.
He'd spent the afternoon with Sarah at the café — three hours of conversation that covered more technical ground than most PhD defenses. She was sharp. Sharper than the 9.5 suggested, because the number only measured potential, and Sarah had already converted a significant portion of hers into actual skill. Her background came out in fragments between bites of a club sandwich she'd ordered without hesitation and eaten without apology.
Stanford dropout. Left halfway through a CS master's program when the funding dried up and her advisor took a sabbatical. Two years of industry work at a startup that cratered during a pivot, leaving her with a résumé gap and a distrust of founders who spoke in buzzwords. Six months as a barista, which paid rent and gave her time to study the things Stanford hadn't covered — systems architecture, optimization theory, the mathematical foundations that formal education treated as electives.
The notebook was her research. Recurrent neural network variants with custom gating mechanisms. She'd been designing them alone, without access to compute, without an advisor, without anyone to talk to about them. A 9.5 running experiments on paper because she couldn't afford a GPU.
Ethan had told her about the project in broad strokes. An AI company. Language generation. A new architecture — he'd been vague on the details, deflecting specific questions with "I'll show you the codebase" and "it's easier to understand when you see it running." She'd been skeptical but engaged. She hadn't said yes to the job. She hadn't said no.
Now, back at the apartment, Ethan stared at the ChronoCloud interface and prepared to spend money he couldn't afford.
The test model was ready. Not the full Transformer — a stripped-down version. Two encoder layers instead of six. Two attention heads instead of eight. A fifty-thousand-token vocabulary, trained on a small text corpus he'd scraped from Project Gutenberg. The point wasn't to build something useful. The point was to verify that his code actually trained. That gradients flowed. That loss decreased. That the architecture in his head translated to a functioning neural network.
He configured the instance. V100-equivalent, single GPU. Estimated training time: sixteen hours at batch size 32. Estimated cost: $800 at $50 per hour.
His finger hovered over the LAUNCH button.
Eight hundred dollars. Rent was $1,800, due in eleven days. Food this week had cost $47. The coffee and Sarah's sandwich totaled $18. His checking account was at $2,100 and falling. His savings were at $8,544 — the number seared into his memory from the bank statement.
This training run would leave him at $7,744. Two more months of rent. Three weeks of food. And nothing left for the real training run — the full-scale Transformer that would need hundreds of GPU-hours and tens of thousands of dollars.
He pressed LAUNCH.
The dashboard updated. Instance spinning up. A progress bar appeared — Initializing training environment. Then: Loading model configuration. Then: Beginning epoch 1/50.
Loss: 11.34. High. Expected for a randomly initialized model on its first pass. The number would drop. If the code was correct, if the hyperparameters were reasonable, if the data pipeline wasn't corrupted, the loss would drop steadily over the next sixteen hours until it settled somewhere in the low single digits.
If.
Ethan leaned back. The chair creaked. The apartment was quiet except for the laptop's fan, which shouldn't have been running hard — the training was happening on a server somewhere outside of time — but spun up anyway, as if the machine itself was anxious.
He opened another tab and refreshed his bank balance. Then closed the tab. Opened it again. Closed it.
Stop.
The microwave clock read 7:43 PM. He'd eaten Sarah's leftover sandwich crusts and a handful of trail mix from a bag he'd found in the back of the pantry. Actual cooking required groceries. Groceries required money. Money required not spending it all on temporal GPU rentals.
He found a packet of instant ramen in the cabinet. Beef flavor. The last one — he'd need to buy more, or graduate to real food, or accept that his diet in this timeline was going to consist exclusively of noodles and borrowed sandwiches.
The microwave hummed. The ramen rotated on the turntable. Ethan watched it through the smudged door glass, counting revolutions because it was easier than calculating how many training runs he could afford before the money ran out.
The answer was: not enough. The math was brutal.
Minimum viable Transformer — the smallest model that could demonstrate coherent text generation — would need approximately a hundred GPU-hours on V100-equivalents. At $50 per hour, that was $5,000 for compute alone. But that assumed perfect hyperparameters on the first try. In practice, training runs failed. Learning rates were wrong. Gradient explosions happened. Data bugs emerged at hour sixty that invalidated everything before them.
A realistic budget was three to five training runs. $15,000 to $25,000 for compute. Plus rent. Plus food. Plus the minimum expenses of existing as a human being in San Francisco.
Total needed: $50,000, minimum. Available: $8,544 and shrinking.
The gap was unbridgeable through personal savings. He needed outside money. Investment. Venture capital.
In 2014. For an AI company. Building technology that wouldn't be taken seriously for three years.
The microwave beeped. Ethan pulled out the ramen. Steam rose from the styrofoam cup. He ate standing at the counter, burning the roof of his mouth on the first bite because patience was a resource he'd exhausted along with his bank account.
---
[Same Apartment — 11:00 PM]
The training run had been going for three hours. Loss: 8.21. Dropping steadily. The curve was clean — no spikes, no plateaus, no signs of instability. The code worked. The architecture translated.
But the invoice was already generating.
ChronoCloud's billing was real-time. A ticker in the corner of the dashboard showed the running cost: $152.50. Three hours. By morning, it would cross $400. By the time the training completed, somewhere north of $800.
Ethan stared at the invoice preview. There was no negotiation with ChronoCloud. No customer service number. No "contact us for enterprise pricing." The interface was transactional and absolute. You paid what it cost. The service existed, it worked, and it extracted every dollar of value from the temporal advantage it provided.
He pulled up the pitch deck — the revised version he'd started before Sarah's input rendered it obsolete. What if a computer could write?
The deck needed work. It needed Sarah's clarity. It needed demo material that didn't exist yet because the model was still training. It needed answers to questions that VCs would ask — market size, competitive landscape, revenue model, team credentials — that Ethan couldn't honestly provide without lying about things he was already lying about.
How do you explain the future to people living in the past?
The answer, he was beginning to understand, was that you didn't. You showed them the present. You showed them output. You showed them a computer generating text that sounded human, and you let their imaginations do the work of understanding why that mattered.
But first, the model had to generate text that sounded human. Which required compute. Which required money. Which required VCs. Who required a demo.
A circle. Perfect. Unbreakable.
Ethan closed the laptop. Opened it. Closed it again. Went to the window.
San Francisco at night was a grid of light — streetlamps, apartment windows, the distant glow of the Financial District. Somewhere out there, in a house in Palo Alto, Richard Hendricks was probably debugging Pied Piper's compression algorithm, riding the high of the Disrupt win, fielding calls from VCs who suddenly wanted to fund the thing they'd ignored for months.
Richard had something Ethan didn't: a demo that worked. A product people could see. A number — 5.2 — that even non-technical observers understood as significant.
Ethan had a training loss curve dropping from 11 to 8 on a toy model, a bank account hemorrhaging cash, and a pitch deck that couldn't explain itself.
He needed that first meeting. David Park — the associate at Basecamp Ventures, the dead man's unanswered contact. A lukewarm lead. A nobody at a nobody fund. But a foot in the door was a foot in the door.
He opened the laptop one final time. Found David Park's email thread. Hit reply.
David — sorry for the late response. Been heads-down on a major pivot. Would love to grab that coffee if you're still free. I'm working on something in AI that I think you'll find interesting. Free this week?
Send.
The training run ticked over to $203.00. Loss: 7.89.
