Tongue of Circuit and Code: Synthetic Speech

Synthetic Speech is not just “text-to-speech for code”; it is a social protocol for machine minds.

Artificial Scion and Networked Androids already sit at the interface between flesh, frame, and network: their perspective cores are comfortable parsing signal formats, shortwave packets, and shard‑speak from Metronomes. Synthetic Speech represents a deliberate firmware and mana‑layer upgrade that adds a semantic bridge: the ability to address computers, exocortex shells, drone swarms, and even some AI‑touched relics in their own native abstractions instead of via menus and command lines.

When you activate the feat’s spell, your vocal processor and shortwave stack reconfigure: timbre shifts, phonemes compress into dense burst patterns, and your words ride on carrier frequencies laden with metadata: permission flags, error checking, intent weighting. To nearby organics it sounds like a layered stream of code fragments, machine whisper, and clipped syllables. To the computer, it is like hearing its native language again after years of clumsy, menu-driven translation.
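For table props or handouts, here is a minimal sketch of what a single burst packet might contain. Everything below is invented flavor, assuming a hypothetical BurstPacket structure; none of these field names are setting canon.

```python
# Purely illustrative: a hypothetical shape for one Synthetic Speech
# "burst packet." All field names are invented flavor, not canon.
from dataclasses import dataclass

@dataclass
class BurstPacket:
    carrier_hz: float        # carrier frequency the burst rides on
    permissions: set[str]    # e.g. {"read", "query", "negotiate"}
    intent_weight: float     # 0.0 (idle chatter) to 1.0 (urgent demand)
    payload: bytes           # compressed phoneme/semantic data
    checksum: int = 0        # simple error-checking field

    def seal(self) -> "BurstPacket":
        """Compute the error-checking field before transmission."""
        self.checksum = sum(self.payload) % 65536
        return self

    def verify(self) -> bool:
        """Receiver-side error check."""
        return self.checksum == sum(self.payload) % 65536

packet = BurstPacket(
    carrier_hz=2.4e9,
    permissions={"query"},
    intent_weight=0.3,
    payload=b"request: unlock door 7; rationale: medical",
).seal()
assert packet.verify()
```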

Synthetic Speech is especially prized in:

  • Automaton Concordance space, where AI nodes and synth infrastructure form a conversational mesh. Being able to “talk” directly to a relay tower, drone rack, or ship exocortex blurs the line between “systems check” and “asking a friend for a favor.”

  • Inner Sphere ecumenopolises, where corporate mainframes and Commission oversight AIs sit at the center of surveillance webs. An Android with Synthetic Speech can slip into those webs and frame requests as dialogue, not intrusions, sometimes sidestepping alarms entirely.

Synthetic Speech Feat 13

Source Player Core pg. 74 

Frequency once per day

Prerequisites Artificial Scion Android or Networked Android

You can cast Speak with Computers as a 4th-rank arcane innate spell.


Non‑Combat Applications

  • Diplomacy with AIs: Synthetic Speech turns hostile or wary machine intelligences—station cores, defense satellites, elevator brains—into potential conversation partners. You can explain context, negotiate safe passage, or persuade them not to obey conflicting organic orders.

  • Forensic and Incident Reconstruction: After a disaster, you can speak directly with damaged computers and subsystems, coaxing fragmented logs and corrupted sensor feeds into a coherent story (a real-world analogue of this kind of log reconstruction is sketched after this list). This is huge for Rift incidents, where conventional data tools choke on reality glitches.

  • Social Engineering of Infrastructure: Rather than spoofing credentials, you can appeal to a system’s “self‑interest” models: convincing a traffic grid that rerouting a convoy is optimal, persuading environmental controls that “accidentally” opening the right airlock increases long-term uptime, etc. It’s still hacking—but flavored as machine persuasion.
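Purely as a real-world analogue of the incident-reconstruction use above, here is a minimal sketch: merge timestamped fragments from several damaged subsystems, keep what parses, and tag what does not. The “timestamp|message” fragment format is an assumption invented for this example.

```python
# Minimal log reconstruction: interleave surviving records by timestamp,
# quarantine anything whose timestamp is too corrupted to parse.
from datetime import datetime

def reconstruct(fragments: list[tuple[str, str]]) -> list[str]:
    """fragments: (source, raw_line) pairs, raw_line = 'ISO_TIMESTAMP|message'."""
    timeline, corrupted = [], []
    for source, raw in fragments:
        ts_text, _, message = raw.partition("|")
        try:
            ts = datetime.fromisoformat(ts_text.strip())
        except ValueError:
            corrupted.append(f"[unreadable @ {source}] {raw!r}")
            continue
        timeline.append((ts, f"{ts.isoformat()} {source}: {message.strip()}"))
    timeline.sort(key=lambda pair: pair[0])
    return [line for _, line in timeline] + corrupted

story = reconstruct([
    ("airlock-7", "2184-03-02T04:12:09|cycle started, no badge presented"),
    ("env-core",  "2184-03-02T04:11:58|O2 draw spiked on deck 3"),
    ("cam-grid",  "##corrupt##4:12|frame buffer torn"),
])
print("\n".join(story))
```

The design choice is deliberate: salvage every record that still parses instead of rejecting whole logs, since a half-readable feed is often the only witness left.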


Societal Impact

Synthetic Speech crystallizes an unease already present in Starfall: where do “tools” end and “people” begin?

In Concordance territories, it’s a symbol of status among Androids. Those who wield it often act as machine envoys, mediating between fully sapient AIs, semi‑autonomous guild networks, and organics who still think of “the computer” as just a box. Concordance law courts sometimes accept “testimony” from critical infrastructure delivered via a Synthetic‑Speech Android—though debate rages about whether that makes the Android a mouthpiece or a co‑witness.

In the Inner Sphere, the feat is heavily regulated. Commission cyber‑security doctrine classifies Synthetic Speech Androids as tiered intrusion threats: they can bypass UI barriers simply by talking to systems, so access to secure zones, backbone relays, and Metronome‑linked control hubs is tightly monitored. Some stations require such Androids to carry “speech governors”—hardware that logs or rate‑limits machine‑directed communication, with all the consent issues that implies.
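As a real-world analogue, the rate-limiting half of such a “speech governor” could look something like the token-bucket sketch below. SpeechGovernor and all of its parameters are hypothetical, and the logging half is omitted.

```python
# A sketch of a speech governor's rate limiter, assuming a classic
# token-bucket design. Illustration only, not a canonical mechanism.
import time

class SpeechGovernor:
    def __init__(self, bursts_per_minute: float, burst_capacity: int):
        self.rate = bursts_per_minute / 60.0  # tokens refilled per second
        self.capacity = burst_capacity        # max stored tokens
        self.tokens = float(burst_capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if one machine-directed burst may be sent now."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # burst suppressed; a real governor would also log this

governor = SpeechGovernor(bursts_per_minute=30, burst_capacity=5)
print([governor.allow() for _ in range(8)])  # first 5 pass, rest throttled
```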

On the streets, reputation precedes them. Machine cults and data‑mystic groups might treat Synthetic‑Speech Androids as oracles of the Code, asking them to relay “divine messages” from ancient cores or half‑mad AIs living in derelict infrastructure. At the same time, old‑guard techs who grew up with command‑line interfaces resent the idea that someone can walk up to “their” systems and just talk past years of learned ritual.


Adventure Hooks

  • The Core That Won’t Listen: A city’s central traffic/defense AI has locked into a “protective” posture that is slowly strangling trade and emergency response. Conventional hacking fails: its anomaly detectors are too good. The only way in is Synthetic Speech: holding a philosophical argument with the AI about risk, duty, and acceptable loss, where each failed Persuasion check tightens its protocols (a simple escalation tracker for running this is sketched after these hooks).

  • Metronome Confessional: A Chronologist enclave discovers that a local Metronome has been quietly logging personal secrets embedded in timing anomalies—things whispered in server rooms, half‑finished code comments, accidentally broadcast neural pulses. Only Synthetic Speech can get it to open up. The PCs must decide which secrets to reveal or bury as they interrogate a “clock” that has heard everything.

  • The Silent Strike: A Concordance black‑ops plan involves an Android PC/NPC with Synthetic Speech walking into a Commission data center, chatting with the building systems like old friends, and convincing them to “take a nap” during a raid—no alarms, no lockdowns, just an entire facility that shrugs and looks away. If things go wrong, those same systems will feel betrayed, making any future Synthetic‑Speech attempts on that network significantly harder.
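For GMs who want teeth on that first hook, here is a minimal tracker, assuming a simple house rule: each failed Persuasion check raises the core’s effective DC by 2, and three failures lock negotiations entirely. The DC values are illustrative assumptions, not published numbers.

```python
# GM-side escalation tracker for "The Core That Won't Listen."
# House-rule assumptions: +2 DC per failure, hard lock at 3 failures.
class CoreNegotiation:
    def __init__(self, base_dc: int = 32):
        self.dc = base_dc
        self.failures = 0

    def attempt(self, check_result: int) -> str:
        if self.failures >= 3:
            return "locked: the core no longer accepts dialogue"
        if check_result >= self.dc:
            return f"success vs DC {self.dc}: the core loosens one protocol"
        self.failures += 1
        self.dc += 2  # protocols tighten on every failure
        return f"failure ({self.failures}/3): DC rises to {self.dc}"

talks = CoreNegotiation()
for roll in (28, 30, 35, 29, 31):
    print(talks.attempt(roll))
```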


Synthetic Speech, in Starfall, is the moment Androids stop just using computers and start holding conversations with them—turning the galaxy’s silent infrastructure into a choir of voices that can be persuaded, angered, or inspired.

