About Me

Hello! Welcome to a world where code and racing converge at full speed.

I'm King, a first-year computer science student and an F1 enthusiast whose veins run with gasoline and binary code.

To me, computer science and Formula 1 are two sides of the same coin: both are a dance of human intellect at the edge of possibility. I'm captivated by the magic that turns logic into creation through code, just as I'm thrilled by the split-second strategy and aerodynamic wonders on the racetrack.

This blog is my personal "data logger" and "development log". Here, I'll share:
Debugging Notes: recording the pitfalls and "aha!" moments of my CS coursework and college life, and sharing useful technical notes.
Telemetry Analysis: breaking down the technology, strategy, and business logic of the F1 paddock.
Architecture Outlook: exploring how fields like AI, aerospace, and consumer electronics are reshaping the underlying architecture of our lives.
I'm not an expert; I'm a curious explorer. I believe understanding how a racing car takes a corner can help me write more efficient code; understanding how a chip is designed can sharpen my view of a team's strategic limits; and understanding the engineering behind Starship can teach me how to push the limits of engineering.

To me, writing elegant code and plotting the perfect racing line both chase the optimal solution; a tech keynote and a pit stop are both performances of meticulously orchestrated teamwork. I want to explore the logic and beauty these fields share at their core.

I hope this space becomes a garage, a lab, and a pit lane for curious minds like ours.

Welcome to this world of logic, speed, and imagination. Strap in: our thought experiment starts now.
Photo: Max Verstappen, one of my favourite drivers.
Photo: Oscar Piastri, one of my favourite drivers.

My favourite formula: Maxwell's equations
My favourite algorithm: Monte Carlo simulation
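Since Maxwell's equations get top billing here, they deserve to be written out. In differential form (SI units):

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} \\
\nabla \cdot \mathbf{B} &= 0 \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\end{aligned}
```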
```python
import random
import statistics
import time
from typing import Callable, List, Optional, Tuple

import numpy as np


class MonteCarloSimulator:
    """A general-purpose Monte Carlo simulator."""

    def __init__(self, seed=None):
        """Initialize the simulator, optionally fixing the random seed."""
        if seed is not None:
            random.seed(seed)
            np.random.seed(seed)

    def estimate_value(self, sampler: Callable, evaluator: Callable,
                       num_samples: int = 10000,
                       verbose: bool = False) -> Tuple[float, dict]:
        """Generic Monte Carlo estimation.

        Args:
            sampler: sampling function; each call returns one sample.
            evaluator: evaluation function; maps a sample to a value.
            num_samples: number of samples to draw.
            verbose: whether to print progress.

        Returns:
            (estimate, statistics dictionary)
        """
        values = []
        start_time = time.time()
        report_every = max(1, num_samples // 10)  # guard against tiny runs
        for i in range(num_samples):
            sample = sampler()
            values.append(evaluator(sample))
            if verbose and i > 0 and i % report_every == 0:
                print(f"Progress: {i / num_samples * 100:.1f}%")

        estimate = sum(values) / num_samples
        stats = {
            'mean': estimate,
            'std': statistics.stdev(values) if len(values) > 1 else 0,
            'min': min(values),
            'max': max(values),
            'samples': num_samples,
            'time': time.time() - start_time,
            # Keep at most 1000 raw values to bound the size of the result.
            'values': values if len(values) <= 1000 else values[:1000],
        }
        if len(values) > 1:
            std_error = stats['std'] / np.sqrt(num_samples)
            stats['std_error'] = std_error
            stats['confidence_95'] = (estimate - 1.96 * std_error,
                                      estimate + 1.96 * std_error)
            stats['confidence_99'] = (estimate - 2.576 * std_error,
                                      estimate + 2.576 * std_error)
        return estimate, stats

    def convergence_analysis(self, sampler: Callable, evaluator: Callable,
                             max_samples: int = 100000,
                             checkpoints: Optional[List[int]] = None) -> dict:
        """Observe how the estimate converges as the sample count grows.

        Returns a dictionary mapping each sample size to its result.
        """
        if checkpoints is None:
            checkpoints = [100, 500, 1000, 5000, 10000, 50000, 100000]
        checkpoints = [c for c in checkpoints if c <= max_samples]

        results = {}
        print("Performing convergence analysis...")
        for n in checkpoints:
            estimate, stats = self.estimate_value(sampler, evaluator, n)
            results[n] = {
                'estimate': estimate,
                'std_error': stats.get('std_error', 0),
            }
            print(f"  Samples {n:7d}: estimate = {estimate:.6f} "
                  f"(±{stats.get('std_error', 0):.6f})")
        return results


if __name__ == "__main__":
    print("=== General Monte Carlo framework example ===")
    simulator = MonteCarloSimulator(seed=42)

    # 1. Estimate pi: the fraction of random points in the unit square
    # that land inside the quarter circle approaches pi/4.
    print("\n1. Estimating pi:")

    def pi_sampler():
        return (random.random(), random.random())

    def pi_evaluator(point):
        x, y = point
        return 1 if x**2 + y**2 <= 1 else 0

    pi_estimate, pi_stats = simulator.estimate_value(pi_sampler, pi_evaluator, 100000)
    print(f"  Estimate: {4 * pi_estimate:.6f} (actual: {np.pi:.6f})")
    print(f"  Standard error: {4 * pi_stats['std_error']:.6f}")
    print(f"  95% confidence interval: ({4 * pi_stats['confidence_95'][0]:.6f}, "
          f"{4 * pi_stats['confidence_95'][1]:.6f})")

    # 2. Estimate the integral of sin(x^2) over [0, 1].
    print("\n2. Estimating the integral of sin(x^2) over [0, 1]:")

    def complex_sampler():
        return random.random()

    def complex_evaluator(x):
        return np.sin(x**2)

    integral_estimate, integral_stats = simulator.estimate_value(
        complex_sampler, complex_evaluator, 50000)
    print(f"  Estimate: {integral_estimate:.6f}")
    print(f"  Standard error: {integral_stats['std_error']:.6f}")

    # 3. Watch the pi estimate converge as the sample count grows.
    print("\n3. Convergence analysis example:")
    convergence_results = simulator.convergence_analysis(
        pi_sampler, pi_evaluator, max_samples=50000)
```
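The convergence analysis above is really a demonstration of one fact: Monte Carlo error shrinks like 1/√N. Here is a minimal standalone sketch of that behaviour (the function name and structure are my own, not part of the simulator above):

```python
import math
import random
import statistics


def estimate_pi(num_samples: int, seed: int = 42) -> tuple:
    """Estimate pi by sampling points in the unit square and counting
    how many fall inside the quarter circle. Returns (estimate, std_error)."""
    rng = random.Random(seed)
    hits = [1 if rng.random() ** 2 + rng.random() ** 2 <= 1 else 0
            for _ in range(num_samples)]
    mean = sum(hits) / num_samples
    # Standard error of the mean; the factor of 4 carries through to pi.
    std_error = statistics.stdev(hits) / math.sqrt(num_samples)
    return 4 * mean, 4 * std_error


# The standard error shrinks roughly as 1/sqrt(N): 100x the samples
# buys only about 10x the precision.
for n in (1_000, 100_000):
    est, err = estimate_pi(n)
    print(f"N={n:>7}: pi ~ {est:.4f} +/- {err:.4f}")
```

This is why the checkpoint table in `convergence_analysis` tightens so slowly: halving the error bar costs four times the samples.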