Threat-modelling onboarding: where fraud starts before the first transaction
Fraud starts earlier than most teams model
Many fraud programs still centre on transaction monitoring, but attackers can gain a meaningful advantage earlier, at account creation and the initial application stage. Much of that risk enters at remote account opening, which means the real fight starts before a single payment is made (1).
A significant share of risk appears before the first transaction, demonstrating that financial crime is not limited to post-onboarding account takeover (2). This shift in the fraud landscape calls for stronger controls at onboarding, so that risk is mitigated before illicit funds can move.
Threat modelling is useful here because it shifts the security focus from transaction safety to identity legitimacy at the point of account creation. Instead of waiting for a suspicious payment, teams analyse how fraudsters exploit registration, proofing, and trust-building stages before an account is ever used.
For modern fraud teams, the real challenge is recognising that once an attacker successfully establishes an account, they have already cleared a critical early control point (3). For financial institutions, the onboarding decision is not only about customer acceptance; it is also an early financial crime control. A weak onboarding process can give fraudsters, mule networks, or synthetic identities the foothold they need before traditional monitoring ever begins.
Transaction monitoring engines are inherently backward-looking. They require a financial event to occur before they can evaluate the risk of that event. If an attacker uses a convincing synthetic identity or stolen credentials that have not yet been flagged in global databases, the transaction engine sees no immediate anomaly. By the time the funds are moving, the attacker is simply executing the final step of a compromise that began days or weeks earlier.
Fraud prevention vs fraud detection for financial institutions
The difference between fraud prevention and fraud detection is mostly about timing. Prevention focuses on securing the perimeter so the anomalous event never has the opportunity to occur. Detection, by contrast, acts after the fact: it looks for the anomalous event once it has already started and tries to contain the damage.
When platforms over-invest in detection at the expense of prevention, they inadvertently allow bad actors to test and refine their account opening strategies. By shifting focus to the onboarding stage, teams can disrupt the attacker’s path before they gain a foothold in the system.
What onboarding fraud actually includes
Before getting into control placement and fraud typologies, it is useful to define where Zero-Knowledge KYC fits. In plain English, Zero-Knowledge KYC is a way to prove a required fact or control outcome without spreading the underlying raw identity data across every internal system.
In a fraud context, that matters because identity proofing and fraud controls need evidence, but they do not need raw documents copied across every system. With that architectural baseline established, we can examine the specific threats that necessitate these controls. Onboarding fraud generally splits into distinct categories, each requiring a tailored defensive response.
First-party fraud vs third-party fraud
First-party fraud occurs when an applicant lies or misrepresents their own information for financial gain (4). This might involve exaggerating income on a loan application or opening an account with the explicit intent of abandoning a negative balance. Because the individual is using their true identity, biometric checks and document verification will typically pass without issue, forcing investigators to look for other indicators of intent.
Third-party fraud involves an external actor abusing stolen details or compromised credentials to open an account in someone else’s name without their knowledge or consent (5). In a remote onboarding scenario, verifying that the person presenting the credentials is the actual owner is the primary defense against third-party attacks (6).
Synthetic identity and fake account creation
Synthetic identity fraud is especially difficult to assess at onboarding because the institution often has little or no prior relationship history to test the applicant against (3). In practice, that lack of history is why synthetic identity risk typically surfaces as new account fraud long before transaction monitoring has enough signal.
Mule accounts and account opening abuse
Mule accounts matter immensely because criminals require accounts to receive, layer, and move illicit proceeds. Fraudsters often recruit individuals to open accounts specifically for this purpose, or they use automated techniques to establish networks of mules (14).
Without these intermediary accounts, the broader financial crime ecosystem cannot function. Detecting account opening abuse associated with these networks is critical for financial institutions attempting to disrupt broader laundering pathways before the money actually enters the financial system.
A lifecycle lens: think in stages, not a single checkpoint
A common vulnerability in fraud prevention is treating account opening as a singular, binary hurdle. Instead, teams should adopt a stage-based, lifecycle view of onboarding (18).
In practice, risk assessment should not begin and end the moment an applicant submits their details. To disrupt the fraud path early, organizations must place controls across the onboarding lifecycle.
Relying on a single checkpoint creates operational fragility; if that one gate is bypassed, the platform is exposed. A lifecycle lens ensures that the scrutiny applied to an account can evolve as new data becomes available. In practice, this is control placement: early risk scoring at account opening, then additional controls as the user moves from registration to verification to first use (18).
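To make that concrete, the sketch below shows one way control placement across the lifecycle could be expressed as configuration. The stage names and control names are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: mapping onboarding lifecycle stages to controls.
# Stage names and control names are illustrative, not a prescribed standard.

from enum import Enum

class Stage(Enum):
    REGISTRATION = "registration"
    VERIFICATION = "verification"
    FIRST_USE = "first_use"

# Each stage gets its own control set, so a bypass at one gate
# does not leave the rest of the lifecycle unguarded.
CONTROLS_BY_STAGE = {
    Stage.REGISTRATION: ["device_fingerprint", "ip_reputation", "velocity_check"],
    Stage.VERIFICATION: ["document_authenticity", "liveness_check", "risk_score"],
    Stage.FIRST_USE:    ["limits_on_new_accounts", "behavioral_baseline", "step_up_auth"],
}

def controls_for(stage: Stage) -> list[str]:
    """Return the controls that should run at a given onboarding stage."""
    return CONTROLS_BY_STAGE[stage]

if __name__ == "__main__":
    for stage in Stage:
        print(stage.value, "->", controls_for(stage))
```

The point of expressing it this way is that no single gate carries the whole defence; each stage contributes its own checks as new data becomes available.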
Account fraud and new account fraud
Viewing onboarding through a lifecycle lens is particularly effective against new account fraud. When an attacker opens an account, they typically execute a series of preparatory steps—navigating the site, testing credentials, and inputting data.
This is why fraud detection is rarely about catching one dramatic “gotcha” moment. It is about reading the pattern of behavior across the application flow: retries, timing, repeated infrastructure, and small inconsistencies that only make sense when viewed together.
By analyzing these interactions in real-time, fraud teams can identify suspicious behavior before the application is even submitted.
Signals: what the first transaction can’t tell you yet
If fraud teams wait for the first transaction to look for suspicious behavior, they miss the context of how the account was established. Device, browser, IP, geolocation, and session indicators can provide useful context before the first transaction ever occurs. These fraud signals matter because they support fraud prevention efforts at the application stage, not just fraud detection after the money moves.
Fraud signals from the same device and same IP address
Repeated applications from the same device or the same IP address can be a useful fraud signal, especially when they are tied to multiple identities or rapid retries. On their own, these indicators do not prove fraud, but they can help analysts identify suspicious patterns earlier and route a new account into stricter review.
At scale, this is where device intelligence becomes especially useful. Repeated attempts from the same device, the same IP address, or linked infrastructure can reveal coordinated abuse long before a human reviewer would see the pattern manually. These patterns become clearer when teams compare new accounts against other accounts linked to the same infrastructure. This is how teams spot multiple accounts, fake accounts, and repeated onboarding fraud attempts that would look “normal” if you only reviewed one application in isolation.
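As a rough illustration of that kind of linkage, the following sketch groups applications by shared device and IP and surfaces clusters of distinct identities. The field names and threshold are hypothetical, and a shared device or IP is a review signal rather than proof of fraud.

```python
# Minimal sketch: grouping applications by shared infrastructure to surface
# clusters that look normal one-by-one. Field names are hypothetical.

from collections import defaultdict

def infrastructure_clusters(applications, min_size=3):
    """Group applications by (device_id, ip) and return clusters of
    distinct identities sharing the same infrastructure."""
    by_infra = defaultdict(set)
    for app in applications:
        key = (app["device_id"], app["ip"])
        by_infra[key].add(app["identity"])
    # A shared key with many distinct identities is a review signal,
    # not proof of fraud on its own.
    return {key: ids for key, ids in by_infra.items() if len(ids) >= min_size}

applications = [
    {"identity": "A", "device_id": "d1", "ip": "203.0.113.7"},
    {"identity": "B", "device_id": "d1", "ip": "203.0.113.7"},
    {"identity": "C", "device_id": "d1", "ip": "203.0.113.7"},
    {"identity": "D", "device_id": "d2", "ip": "198.51.100.4"},
]
print(infrastructure_clusters(applications))
```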
Device intelligence and network signals
Signal layering is significantly stronger than relying on a document check alone. NIST points to risk-based signals such as IP address, geolocation, timing patterns, and browser metadata as useful inputs into authentication and fraud decisions (9).
In practice, fraud teams use these signals to flag suspicious patterns: repeated applications, unusual geolocation changes, and browser/session indicators that warrant additional controls (9).
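A simple way to picture signal layering is a weighted score over several weak indicators, as in the sketch below. The signal names and weights are assumptions for illustration, not calibrated values.

```python
# Minimal sketch: layering several weak signals into one risk score,
# rather than relying on a document check alone. Weights and signal
# names are illustrative assumptions, not calibrated values.

SIGNAL_WEIGHTS = {
    "ip_on_known_proxy": 0.30,
    "geolocation_mismatch": 0.25,
    "browser_metadata_anomaly": 0.15,
    "rapid_retries": 0.30,
}

def layered_risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of the signals that fired, capped at 1.0."""
    score = sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))
    return min(score, 1.0)

session = {"ip_on_known_proxy": True, "rapid_retries": True}
print(layered_risk_score(session))  # 0.6 -> candidate for additional checks
```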
Fraud patterns and suspicious patterns
Abnormal user behavior during the session can flag high-risk applications before an identity verification check even begins. The goal is not to “catch” one event but to spot known fraud patterns as they emerge from user behavior and session flow.
Device and telecom-related indicators can also matter. NIST specifically calls out device swap, SIM change, number porting, and other abnormal behaviour as relevant risk indicators in authentication contexts (10). For onboarding teams, the practical lesson is simple: do not rely on document review alone when the surrounding session signals already suggest elevated risk.
In practice, teams may also use behavioral analytics to help distinguish normal human interaction from scripted or bot-driven activity. That does not mean every unusual interaction is fraudulent, but it gives fraud teams another layer for identifying suspicious patterns before a bad actor completes onboarding.
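As one hedged example of that kind of behavioral check, the sketch below flags sessions whose form-filling timing is implausibly fast or implausibly uniform. The thresholds are illustrative assumptions and would need tuning against real traffic.

```python
# Minimal sketch: using coarse session-timing features to separate scripted
# form submission from typical human interaction. Thresholds are
# illustrative assumptions, not tuned values.

from statistics import pstdev

def looks_scripted(field_fill_seconds: list[float],
                   total_session_seconds: float) -> bool:
    """Flag sessions that are implausibly fast or implausibly uniform."""
    too_fast = total_session_seconds < 10          # whole form in under 10s
    too_uniform = (
        len(field_fill_seconds) > 2
        and pstdev(field_fill_seconds) < 0.05      # near-identical timing per field
    )
    return too_fast or too_uniform

print(looks_scripted([0.21, 0.20, 0.21, 0.20], total_session_seconds=4.0))   # True
print(looks_scripted([3.1, 7.8, 2.4, 12.0], total_session_seconds=95.0))     # False
```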
Control placement: where to apply friction, checks, and proofing
Effective fraud detection is largely an exercise in proportionate friction. The objective is to secure the platform without treating every legitimate customer as a suspect.
False positives and legitimate users
Low-risk cases should not receive maximum friction by default. When the available signals align with expected user behavior, the application process can proceed smoothly. Over-triggering alerts creates a high volume of false positives, which frustrates legitimate users and wastes investigative resources.
A risk-based model is essential here: low-risk applicants should move through a smoother path, while higher-risk cases trigger additional checks. The objective is not maximum friction, but proportionate friction that protects conversion while still blocking abuse.
Reducing false positives is not just a customer experience issue; it is an operational one. When financial institutions send too many legitimate users into manual review, investigators lose time, queues grow, and the signal-to-noise ratio gets worse. Many teams reduce false positives by combining rules with behavioral biometrics and behavioral analysis, rather than relying on blunt blocks.
Conversely, higher-risk cases may justify more documentation, more proofing steps, or additional checks to satisfy the firm’s risk appetite. The goal is to apply friction strategically, placing hurdles only where the risk indicators warrant them.
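One minimal way to express proportionate friction is a banded routing function like the sketch below; the band boundaries are hypothetical and would follow the firm's risk appetite and false-positive tolerance.

```python
# Minimal sketch: translating a risk score into proportionate friction.
# Band boundaries are hypothetical and would come from the firm's
# risk appetite and false-positive tolerance.

def friction_for(score: float) -> str:
    if score < 0.3:
        return "frictionless"        # standard checks only, protect conversion
    if score < 0.7:
        return "step_up"             # extra proofing: document + liveness checks
    return "manual_review"           # route to trained reviewers

for s in (0.1, 0.45, 0.85):
    print(s, "->", friction_for(s))
```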
Identity proofing beyond verification
Identity proofing is more than a simple document upload; it is a structured control process (11). It requires validating that the claimed identity exists and that the applicant is the true owner of that identity, employing risk-based friction to verify identities to the necessary level of assurance (7).
In remote onboarding, firms may use document-authenticity checks, biometric verification, and liveness controls as anti-impersonation safeguards (6)(7). These tools are not perfect, but they can raise the cost of spoofing and reduce unnecessary manual review.
Escalation: when automated decisions should stop and humans should step in
While automation is necessary to manage account opening at scale, it is incomplete on its own. Exception handling matters more than many teams think. When automated systems encounter anomalies they cannot resolve, a clear path for escalation is required. This is where manual review and human investigators still matter: escalation is how you keep conversion moving without letting edge cases become systemic fraud.
Machine learning can help score applications at scale, but it should support human judgment rather than replace escalation paths for ambiguous cases. For the grey zone, use human investigators to resolve edge cases and prevent isolated incidents from becoming repeatable attack paths.
Manual review and exception handling
When automated decisions cannot confidently determine if an application belongs to a legitimate customer or a threat actor, the process must allow for risk-based escalation. Trained reviewers and manual review workflows exist specifically for cases that do not fit the default path.
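The sketch below illustrates one possible shape for that escalation contract: clear-cut scores are decided automatically, while grey-zone cases are queued for reviewers with their context attached. Thresholds and field names are assumptions for illustration.

```python
# Minimal sketch: escalating ambiguous automated decisions to a human queue.
# The score bands and the ReviewCase fields are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ReviewCase:
    application_id: str
    score: float
    reasons: list[str] = field(default_factory=list)

review_queue: list[ReviewCase] = []

def decide(application_id: str, score: float, reasons: list[str]) -> str:
    if score < 0.3:
        return "approve"
    if score > 0.8:
        return "decline"
    # Grey zone: do not force a machine decision; hand the case, with its
    # context, to a trained reviewer so the edge case does not become a
    # repeatable attack path.
    review_queue.append(ReviewCase(application_id, score, reasons))
    return "escalate"

print(decide("app-001", 0.55, ["geolocation_mismatch", "new_device"]))
print(review_queue)
```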
Why onboarding controls must connect to ongoing monitoring
Onboarding controls are necessary but not sufficient. Viewing account creation and ongoing tracking as isolated silos leaves firms exposed to accounts that “age” before becoming active in financial crime.
Fraud detection models also need continuous monitoring and updating. Attackers adapt quickly, and static controls decay fast once their patterns become predictable. Continuous monitoring also helps detect account takeover attempts and other emerging threats that appear only after account opening. Treat onboarding as the first input into continuous monitoring and real time monitoring, not as a one-time pass/fail decision.
Real time monitoring and actionable insights
The value of onboarding data increases when it feeds real time monitoring after the account is opened. Instead of treating onboarding as a closed file, firms should use those early signals to generate actionable insights for later fraud detection, transaction monitoring, and continuous monitoring.
Continuous risk assessment
Ongoing monitoring reduces overreliance on one-time onboarding CDD (12). A digital identity is dynamic; the baseline established at onboarding must inform the risk models used later.
If an account behaves outside the parameters expected of its onboarding profile, digital identity signals can support ongoing due diligence and transaction monitoring to flag the discrepancy (13).
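As a rough sketch of that idea, the example below compares a post-onboarding event against the baseline captured at account opening and returns the discrepancies worth flagging. The field names and checks are illustrative assumptions.

```python
# Minimal sketch: comparing later activity to the baseline captured at
# onboarding, so the account's early profile keeps informing monitoring.
# Field names and the simple checks are illustrative assumptions.

def deviations_from_baseline(baseline: dict, event: dict) -> list[str]:
    """Return human-readable discrepancies between onboarding expectations
    and an observed post-onboarding event."""
    flags = []
    if event["country"] != baseline["declared_country"]:
        flags.append(f"country {event['country']} != onboarding {baseline['declared_country']}")
    if event["amount"] > baseline["expected_monthly_volume"]:
        flags.append("single event exceeds declared expected monthly volume")
    if event["device_id"] not in baseline["known_devices"]:
        flags.append("unrecognised device")   # possible account takeover signal
    return flags

baseline = {
    "declared_country": "DE",
    "expected_monthly_volume": 2_000,
    "known_devices": {"d1"},
}
event = {"country": "NG", "amount": 9_500, "device_id": "d7"}
print(deviations_from_baseline(baseline, event))
```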
Where Zero-Knowledge KYC fits
Zero-Knowledge KYC is not a fraud cure, nor does it replace the need for device intelligence, escalation, or ongoing transaction monitoring. At a high level, it is a way to prove required attributes or control outcomes without revealing more underlying information than necessary (16)(17).
Architectures in this category, including approaches such as Verifyo, aim to reduce the spread of raw identity data across internal systems. That matters because limiting copied personal data can reduce confidentiality exposure and data-handling risk (15). It fits cleanly as an architectural layer inside onboarding, proofing, and evidence handling, delivering the necessary operational outcomes for the compliance program. In practice, this aligns with an issuer–holder–verifier model where the platform verifies the claim it needs, without collecting more identity data than necessary (17).
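To show the shape of that model without implying any particular product's implementation, the sketch below verifies a narrow, issuer-signed claim instead of handling raw documents. It uses an HMAC as a stand-in for the issuer's signature and is not an actual zero-knowledge proof; a real deployment would rely on asymmetric signatures or genuine zero-knowledge constructions, and the claim format here is purely illustrative.

```python
# Minimal sketch of the issuer-holder-verifier idea: the platform verifies a
# narrow, signed claim (e.g. "kyc_passed") instead of storing raw documents.
# The HMAC stands in for the issuer's signature; this is NOT a real
# zero-knowledge proof, only an illustration of "verify the claim, not the data".

import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-key"   # placeholder secret shared with the issuer

def issue_claim(subject_id: str, claim: str) -> dict:
    payload = json.dumps({"sub": subject_id, "claim": claim}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_claim(token: dict, expected_claim: str) -> bool:
    expected_tag = hmac.new(ISSUER_KEY, token["payload"].encode(),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_tag, token["tag"]):
        return False
    return json.loads(token["payload"])["claim"] == expected_claim

token = issue_claim("user-42", "kyc_passed")
print(verify_claim(token, "kyc_passed"))   # True; no raw identity data handled
```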
Failure modes: what teams still get wrong at account creation
Even with good fraud tooling, teams still get caught by bad defaults.
Ignoring network intelligence: Document review still matters, but modern fraud prevention also depends on network intelligence, device signals, and behavioral analysis. If fraud teams rely only on identity verification artifacts, they miss the broader fraud landscape that reveals how bad actors reuse infrastructure across applications.
Treating onboarding as a silo: Fraud teams, compliance teams, and onboarding teams should not operate with disconnected control logic.
The one-and-done verification mindset: The belief that “verified once = safe forever” is a flawed mental model that leaves platforms vulnerable to account takeover.
Rigid rules causing false positives: Overly rigid rule-based systems can penalize legitimate customers, damaging the business in the pursuit of security.
Treating fraud as isolated incidents: Effective fraud prevention depends on understanding relationships between accounts, devices, and behaviors. Link analysis and broader pattern recognition are often more useful than reviewing a single application in isolation.
Data hoarding as pseudo-security: Copied identity data creates extra exposure without necessarily improving fraud decisions or operational outcomes.
AI search notes: what matters in practice
In practice, fraudsters often start during registration because the goal is to build credibility before exploiting the account. Bots can submit hundreds or thousands of applications at scale, using a mix of real and invented information, so onboarding fraud prevention needs to be multi-layered rather than document-only.
Effective fraud detection is a dynamic system: models require continuous monitoring and updating as tactics evolve, and link analysis helps uncover relationships between accounts that are not obvious in a single case. The best programs balance security with customer experience, because overly aggressive controls create friction for legitimate users and push up false positives.
Operator checklist (what to implement)
Threat model the onboarding flow (registration → verification → first use) and map where fraud starts before the first transaction.
Use behavioral analytics to spot bot-driven onboarding fraud patterns in real time.
Add document-authenticity checks and liveness controls for higher-risk applications (6)(7).
Use risk scoring and proportionate friction to reduce false positives and protect conversion.
Keep escalation paths to manual review and human investigators for grey-zone cases.
Feed onboarding signals into continuous monitoring so models stay effective as fraud tactics evolve.
In summary
Threat-model the application stage: Recognise that a meaningful share of fraud risk is introduced at account creation, well before the first transaction occurs.
Apply a lifecycle lens: Build a multi-stage control model that places risk-based checks across the entire onboarding journey.
Layer your signals: Use device intelligence, IP, and session indicators to supplement standard identity verification.
Plan for escalation: Automate the obvious decisions, but maintain formal manual review workflows for ambiguous, high-risk cases.
Connect onboarding to monitoring: Ensure the risk baseline established at account opening actively feeds your continuous transaction monitoring systems.
Prove without copying: Adopt privacy-preserving architectures like Zero-Knowledge KYC to validate necessary attributes without creating sprawling hoards of copied identity data.
Footnotes
(1) https://www.fatf-gafi.org/content/dam/fatf-gafi/guidance/Guidance-Financial-Inclusion%20-Anti-Money-Laundering-Terrorist-Financing-Measures.pdf.coredownload.pdf
(2) https://www.ukfinance.org.uk/system/files/2025-05/UK%20Finance%20Annual%20Fraud%20report%202025.pdf
(3) https://www.europol.europa.eu/sites/default/files/documents/cyber-telecom_crime_report_2019_public.pdf
(4) https://www.cifas.org.uk/newsroom/fraud-behaviours-2024
(5) https://www.consumerfinance.gov/compliance/compliance-resources/deposit-accounts-resources/electronic-fund-transfers/electronic-fund-transfers-faqs/
(6) https://www.eba.europa.eu/sites/default/files/document_library/Publications/Guidelines/2022/EBA-GL-2022-15%20GL%20on%20remote%20customer%20onboarding/1043884/Guidelines%20on%20the%20use%20of%20Remote%20Customer%20Onboarding%20Solutions.pdf
(7) https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=959881
(8) https://pages.nist.gov/800-63-4/sp800-63.html
(9) https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-63B-4.pdf
(10) https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-63b.pdf
(11) https://pages.nist.gov/800-63-4/sp800-63a/ial-general/
(12) https://www.fatf-gafi.org/content/dam/fatf-gafi/guidance/Guidance-on-Digital-Identity-report.pdf
(13) https://www.fatf-gafi.org/content/dam/fatf/documents/reports/Opportunities-Challenges-of-New-Technologies-for-AML-CFT.pdf
(14) https://www.fbi.gov/how-we-can-help-you/scams-and-safety/common-frauds-and-scams/money-mules
(15) https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-122.pdf
(16) https://csrc.nist.gov/glossary/term/zero_knowledge_proof
(17) https://www.w3.org/TR/vc-data-model-2.0/
(18) https://www.ecb.europa.eu/euro/digital_euro/timeline/profuse/shared/pdf/ecb.dedocs230113_Annex_1_Digital_euro_market_research.en.pdf