Beijing’s draft rules on digital humans are a clear recalibration of power and responsibility in the AI era. These proposals demand visible “digital human” labels, forbid virtual intimate relationships with minors, outlaw the unauthorised use of another person’s likeness, and bar virtual characters from being used to evade identity checks. The message is blunt: the virtual world will no longer be an unchecked playground.
Why this matters for Singapore SMEs
Small and medium enterprises here are not spectators. Every marketing team using avatars, every startup experimenting with virtual personal assistants, and every merchant outsourcing virtual spokespeople must wake up to a new reality. Regulations like China’s are not isolated blips; they are signals. Expect regulators across the region to watch, adapt, and copy parts of this blueprint. Compliance will soon be part of competitive advantage, not just legal hygiene.
What the draft actually does
The key features are unambiguous and prescriptive: prominent labelling of digital humans; a ban on offering virtual intimate relationships to under-18s; strict prohibitions on using another person’s data to create a digital human without consent; and rules preventing virtual entities from spreading content that threatens national security or incites discrimination. Platforms are also required to spot signs of suicidal or self-harming behaviour and to intervene with professional support.
Practical implications — fast
For SMEs, the immediate fallout is operational. Marketing copy must be audited. Avatar libraries need provenance records. Age-verification systems should be integrated where interaction could veer into emotional dependency. Contracts with vendors must be rewritten to require consent, audit logs, and indemnities. None of this is optional for firms doing business with Chinese platforms or targeting users in jurisdictions that adopt similar rules.
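To make “provenance records” concrete, here is a minimal sketch of what an auditable consent trail for an avatar asset could look like. Everything in it, from the class names to the field layout, is a hypothetical illustration rather than a format any regulator or platform prescribes.

```python
# A minimal sketch of an auditable provenance record for an avatar asset.
# All names here (AvatarAsset, ConsentRecord, field layout) are hypothetical
# illustrations, not a format prescribed by any regulation or platform.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class ConsentRecord:
    subject_name: str          # the real person whose likeness or data is used
    consent_document_uri: str  # where the signed consent is archived
    granted_at: datetime
    expires_at: datetime | None = None

    def is_valid(self, now: datetime) -> bool:
        return self.expires_at is None or now < self.expires_at


@dataclass
class AvatarAsset:
    asset_id: str
    source_vendor: str
    consents: list[ConsentRecord] = field(default_factory=list)

    def may_deploy(self) -> bool:
        """Rule of thumb: no provable consent, no deployment."""
        now = datetime.now(timezone.utc)
        return bool(self.consents) and all(c.is_valid(now) for c in self.consents)


# An asset with no consent trail is blocked by default.
asset = AvatarAsset(asset_id="avatar-001", source_vendor="ExampleVendorCo")
assert not asset.may_deploy()
```

The design choice worth copying is the default: an asset without a documented, unexpired consent trail simply cannot ship.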
Not theoretical — a quick scene
During a late-afternoon briefing with a small e-commerce team, the founder snapped: “If the avatar says something wrong, customers will never trust the brand again.” The concern was real, bordering on panic. That reaction matters. Emotional trust is fragile; one poorly labelled or misused digital human can cause reputational damage that cash-flow projections won’t cover. This is not a distant regulatory puzzle. It’s a present operational risk.
Action checklist for Singapore SMEs
Act quickly and decisively. The following steps are non-negotiable for any company serious about deploying digital humans; a short code sketch after the list shows how several of them might be wired together.
- Labelling policy: Ensure every virtual persona is prominently marked as a digital human. Visibility matters: no tiny disclaimers buried in footers.
- Consent and likeness management: Maintain clear, auditable records proving consent to use any real person’s data or likeness. If consent cannot be produced, don’t use the asset.
- Age verification: For interactions that could be intimate, emotional, or targeted at youth, implement robust age-check measures and block under-18 engagement where necessary.
- Content guardrails: Define forbidden categories — political subversion, secession advocacy, hate speech, sexualised interactions with minors, extreme violence — and enforce them with filters and human review.
- Identity-check safeguards: Prevent virtual humans from being used to bypass authentication flows. Use multi-factor checks where identity must be proven.
- Vendor contracts: Require suppliers to commit to compliance with local and regional AI regulations, to provide audit logs, and to give prompt notice of breaches or misuse.
- Intervention pathways: Establish protocols for when users exhibit self-harm signals — direct them to professional help, keep escalation logs, and train moderators to act swiftly.
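As flagged above, here is a minimal sketch of how the labelling, age-verification, guardrail, and intervention items might be chained into a single pre-send gate. The keyword lists and every name in it are assumptions made for this sketch; a production system would replace the keyword matching with trained classifiers and human review, as the checklist itself recommends.

```python
# Illustrative pre-send gate combining four checklist items: labelling,
# age verification, content guardrails, and self-harm escalation. The
# keyword lists and all names are assumptions made for this sketch.
from dataclasses import dataclass

DISCLOSURE = "[Digital human] "  # prominent label, not a buried footer note

FORBIDDEN_KEYWORDS = {  # placeholder stand-ins for proper classifiers
    "hate_speech": ["<slur placeholder>"],
    "extreme_violence": ["<graphic threat placeholder>"],
}
SELF_HARM_SIGNALS = ["want to end it", "hurt myself"]  # illustrative only


@dataclass
class User:
    user_id: str
    age_verified: bool
    age: int | None


def gate_reply(user: User, reply: str, intimate_context: bool) -> str | None:
    """Return the labelled reply, or None if the interaction must be blocked."""
    # Age verification: block intimate or emotional contexts for minors.
    if intimate_context and (not user.age_verified or (user.age or 0) < 18):
        return None
    # Content guardrails: a crude keyword pass, sufficient for the sketch.
    lowered = reply.lower()
    if any(w in lowered for words in FORBIDDEN_KEYWORDS.values() for w in words):
        return None
    # Prominent labelling on every outbound message.
    return DISCLOSURE + reply


def needs_escalation(message: str) -> bool:
    """Flag self-harm signals for escalation to trained moderators."""
    lowered = message.lower()
    return any(sig in lowered for sig in SELF_HARM_SIGNALS)
```

An unverified user in an intimate context gets no reply at all, while an ordinary product question comes back with the disclosure prefix attached and self-harm signals are routed to humans rather than handled by the avatar.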
Governance and culture — the overlooked battleground
Technical controls alone won’t be enough. Culture must shift toward anticipatory governance. That means training teams to recognise when a virtual persona is being weaponised or drifting into manipulative behaviour. It also means leadership publicly committing to transparency: how avatars are built, who approves their scripts, and how user data is protected.
Small steps, big impact
Start with a single, reproducible governance document. Make one page that explains avatar use-cases, acceptable content, age limits, and escalation contacts. Roll it out, enforce it, then iterate. Consistency breeds trust. Trust reduces churn. Reduced churn sustains revenue.
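One way to keep that one-pager enforceable rather than aspirational is to mirror it in a machine-readable form that automated checks can read. The structure below is a hypothetical sketch; the field names, contact addresses, and review cadence are placeholders, not a standard.

```python
# A hypothetical machine-readable mirror of the one-page governance document,
# so the rules humans sign off on can also drive automated checks. Field
# names, contacts, and the review cadence are placeholders, not a standard.
GOVERNANCE_POLICY = {
    "use_cases": ["customer_support", "product_demos"],  # approved uses only
    "forbidden_categories": [
        "political_subversion", "secession_advocacy", "hate_speech",
        "sexualised_minor_interaction", "extreme_violence",
    ],
    "age_limits": {"minimum_age": 18, "applies_to": "intimate_or_emotional"},
    "labelling": {"required": True, "placement": "prominent"},
    "escalation_contacts": {
        "self_harm": "safety-oncall@example.com",   # placeholder address
        "likeness_complaint": "legal@example.com",  # placeholder address
    },
    "review": {"owner": "head_of_marketing", "cadence_days": 90},
}


def is_use_case_approved(use_case: str) -> bool:
    """Automated deployments read the same policy humans approved."""
    return use_case in GOVERNANCE_POLICY["use_cases"]


assert is_use_case_approved("customer_support")
assert not is_use_case_approved("virtual_companion")
```

Keeping the human document and the machine copy in the same repository makes drift visible: if the policy changes, the checks change in the same commit.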
Risk forecasting — blunt truth
Ignore these signals and the likely consequences are stark: lost market access, heavy fines, forced content takedowns, and long-term reputational scars. Conversely, adopt rigorous controls and transparency, and a new opportunity emerges: brands that treat digital humans responsibly will stand out, especially among parents and cautious consumers.
Policy moves in Beijing reflect a broader pattern: aggressive AI adoption married to equally aggressive governance. Singapore SMEs must read both halves of that sentence. The technology unlocks customer engagement and cost efficiencies, but the governance tail wags the business dog if ignored. Prepare to be held accountable for how virtual characters behave, what they say, and whom they resemble.
Final word
Change the default settings now. Build consent-first processes. Train moderation teams. Rewrite vendor agreements. Label clearly and verify identities. This is not a future problem — it is a present operational imperative. Those who move decisively will survive regulatory shocks and, more importantly, protect the people they serve.

