An independent research laboratory building a systematic, data-driven edge at the intersection of quantitative finance, machine learning, and AI-powered execution.
End-to-end systematic research, from raw market data to live automated execution — with rigorous statistical validation at every step.
Classical systematic strategies grounded in market microstructure, statistical arbitrage, momentum, mean-reversion, and factor-based models. Hypothesis-first, theory-driven.
Predictive modeling using XGBoost, Random Forests, and ensemble methods with walk-forward cross-validation, Optuna hyperparameter tuning, and MLflow experiment tracking.
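The core of walk-forward cross-validation can be sketched in a few lines: an expanding training window with strictly later test blocks, so the model never sees the future. Function name and fold parameters below are illustrative; in practice each fold would feed an XGBoost fit inside an Optuna trial.

```python
def walk_forward_splits(n_samples, n_folds, min_train, gap=0):
    """Yield (train_idx, test_idx) pairs for expanding-window walk-forward CV.

    Each fold trains on all data up to a cutoff and tests on the next block.
    `gap` leaves a buffer between train and test to avoid leakage from
    labels that overlap the boundary.
    """
    test_size = (n_samples - min_train) // n_folds
    for k in range(n_folds):
        train_end = min_train + k * test_size
        test_start = train_end + gap
        test_end = min(test_start + test_size, n_samples)
        yield list(range(train_end)), list(range(test_start, test_end))
```

Unlike shuffled k-fold, every test observation here is strictly later than every training observation, which is what makes the out-of-sample statistics honest for time series.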
Sequence modeling with LSTM and Transformer architectures for temporal pattern extraction, alongside CNNs for chart pattern recognition and ANN-based regime classifiers.
Fully automated rule-based execution engines deployed on cloud infrastructure with real-time signal generation, position management, and broker API integration.
ML-driven bots that adapt signal thresholds, position sizing, and entry/exit logic based on learned market regimes — moving beyond static rule execution into adaptive intelligence.
Large language model integration for reasoning about macro context, news events, and unstructured data — feeding structured signal into systematic execution pipelines.
NLP pipelines processing financial news, earnings transcripts, regulatory filings, and social data to quantify market sentiment as a tradeable alternative data signal.
End-to-end pipeline automation — data ingestion, feature computation, signal generation, risk checks, order routing, and monitoring — with zero manual intervention required.
Research infrastructure engineering: backtesting frameworks, data pipelines, QuantStats analytics, risk dashboards, and deployment architecture for institutional-grade execution.
Systematic, reproducible research from market observation to live deployment. No shortcuts, no p-hacking, no lookahead bias.
Define the tradeable universe. Ingest, clean, and store high-integrity OHLCV data with proper handling of splits, dividends, and corporate actions.
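The split handling mentioned above amounts to back-adjustment: every price before a split is divided by the split ratio so the series is continuous. A minimal sketch, with hypothetical event indices and ratios:

```python
def back_adjust(closes, split_events):
    """Back-adjust a close series for splits.

    `split_events` maps bar index -> split ratio (2.0 for a 2-for-1 split
    effective at that bar). Walking backwards, ratios compound so that all
    bars earlier than an event are divided by it. Dividend adjustment works
    the same way with a factor of (1 - dividend / prior close).
    """
    factor = 1.0
    adjusted = []
    for i in range(len(closes) - 1, -1, -1):
        adjusted.append(closes[i] / factor)
        if i in split_events:          # affects bars earlier than i
            factor *= split_events[i]
    adjusted.reverse()
    return adjusted
```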
Return distributions, autocorrelation structure, volatility clustering, intraday patterns, and market regime identification before any model is touched.
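Two of those checks reduce to one statistic: volatility clustering shows up as positive autocorrelation in squared returns even when raw returns are uncorrelated. A plain sample-autocorrelation estimator (function name is ours) covers both:

```python
def autocorr(series, lag):
    """Sample autocorrelation at the given lag (standard biased estimator)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var
```

Comparing `autocorr([r * r for r in returns], 1)` against the same statistic on raw returns is a quick volatility-clustering diagnostic before any model is fit.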
Every strategy begins as a testable hypothesis about market behavior — not a model search. Theory first, ML last. Signal quality over model complexity.
Signals + position sizing + transaction costs + slippage + execution delay. No backtest is meaningful without all five components accounted for.
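A minimal sketch of how those five components combine into net strategy returns; parameter names and basis-point figures are illustrative, not the lab's calibration:

```python
def net_returns(signal, asset_returns, cost_bps=5.0, slippage_bps=2.0, delay=1):
    """Per-period net strategy returns.

    signal[t] is the position decided at t; `delay` shifts it so fills
    happen `delay` bars later. Transaction cost and slippage are charged
    on turnover (the absolute change in position).
    """
    per_unit_cost = (cost_bps + slippage_bps) / 1e4
    positions = [0.0] * delay + list(signal[:len(signal) - delay])
    net, prev = [], 0.0
    for pos, ret in zip(positions, asset_returns):
        turnover = abs(pos - prev)
        net.append(pos * ret - turnover * per_unit_cost)
        prev = pos
    return net
```

Dropping any one term (setting costs to zero, or delay to zero) is exactly how a backtest flatters itself, which is why all five inputs are mandatory.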
Walk-forward testing, permutation tests, Sharpe t-statistics, and overfitting gap analysis comparing in-sample vs. out-of-sample performance.
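One of those checks, a permutation test on the Sharpe ratio, can be sketched as follows: shuffling the signal destroys any real signal/return alignment, so the fraction of shuffles matching the observed Sharpe estimates a p-value. Function names are ours; a production version would also block-shuffle to preserve autocorrelation.

```python
import math
import random

def sharpe(returns):
    """Per-period Sharpe ratio (annualize by sqrt(periods/year) as needed)."""
    n = len(returns)
    mu = sum(returns) / n
    sd = math.sqrt(sum((r - mu) ** 2 for r in returns) / (n - 1))
    return mu / sd

def permutation_pvalue(signal, asset_returns, n_perm=999, seed=0):
    """P-value for H0: the signal's alignment with returns is pure chance."""
    rng = random.Random(seed)
    observed = sharpe([s * r for s, r in zip(signal, asset_returns)])
    hits = 0
    for _ in range(n_perm):
        shuffled = list(signal)
        rng.shuffle(shuffled)
        if sharpe([s * r for s, r in zip(shuffled, asset_returns)]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one so p is never exactly 0
```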
Live execution on cloud infrastructure with real-time P&L tracking, drawdown alerts, automated position management, and continuous performance monitoring.
From raw data to deployed capital — the complete research and engineering stack in one lab.
Former Senior Software Engineer turned independent quantitative researcher. Building systematic, ML-driven trading strategies at the intersection of software engineering and financial research.
Tarantula Research Labs is an independent quantitative research laboratory founded on a single principle: systematic edge is built through rigorous process, not intuition.
We bring software engineering discipline to quantitative finance — clean pipelines, reproducible research, and institutional-grade validation methodology applied to systematic strategy development.
"ML is the final layer, not the starting point. High-quality signals built on market understanding consistently outperform deep nets trained on noise."
The lab's research ranges from classical systematic strategies to deep learning sequence models and LLM-powered market intelligence, always grounded in statistical validity and real-world executability.
Open to collaboration with researchers, quant firms, and technologists. Whether you're exploring a strategy, building infrastructure, or looking for ML-quant capability — reach out.