
Article



By Sujatha R

Technical Writer

  • Published: June 23, 2025
  • 12 min read

As AI advances, new demands have emerged for systems to interact with physical environments, interpret sensory input, and make dynamic decisions. While large language models (LLMs) and vision models have made progress in reasoning and perception, they require orchestration with external planners or controllers to perform interactive, goal-directed tasks. Large action models (LAMs) bridge this gap by integrating perception, planning, and control into a unified framework for agents to understand their environment, make decisions, and take physical or digital actions in real time. In this article, we explore why LAMs matter, how they operate, and the role they play across industries.

Key takeaways:

  1. LAMs combine multimodal sensory data like vision, touch, and language to understand their environment and perform meaningful, goal-directed actions.

  2. From warehouse robotics and surgical automation to game strategy and digital UI interaction, LAMs enable agents to operate across physical and digital domains with adaptability and precision.

  3. While LLM agents focus on text and reasoning, LAMs extend those capabilities to decision-making, integrating language and neural networks into systems that can plan and act in the real world.

What are large action models?

LAMs are advanced artificial intelligence systems designed to help machines understand their surroundings and take meaningful physical actions in the real world. Unlike language models that focus on generating text or vision models that analyze images, LAMs integrate multiple sensory inputs, like vision, touch, spatial awareness, and language, to guide real-time decision-making and control. These models are trained on large datasets using a combination of techniques such as reinforcement learning and imitation learning (learning by copying demonstrations). Once trained, they can plan, adapt, and act through a series of steps to get things done.

You can think of LAMs as combining the intelligence of LLMs, the creativity of generative AI, the perception of computer vision systems, and the decision-making abilities of neural networks, prepared for real-world action. For example, imagine a LAM-programmed robot in a kitchen that can understand a spoken command like “make a sandwich.” It would need to detect and identify ingredients using visual input, figure out where everything is, plan a sequence of steps, and use its arms to prepare the sandwich, all without being manually programmed for each step. While this full sequence remains a goal for future systems, models like Google’s RT-2 have demonstrated the ability to recognize objects and perform simple tasks.

Experience the power of AI and machine learning with DigitalOcean Gradient GPU Droplets. Leverage NVIDIA H100, H200, RTX 6000 Ada, L40S, and AMD MI300X GPUs to accelerate your AI/ML workloads, deep learning projects, and high-performance computing tasks with simple, flexible, and cost-effective cloud solutions.

Sign up today to access DigitalOcean Gradient GPU Droplets and scale your AI projects on demand without breaking the bank.

How does a large action model work?

LAMs operate through a sequence of steps that transform raw sensory input into structured, goal-directed actions. This process integrates perception, reasoning, and control through deep learning, planning algorithms, and motor execution systems.

Large action model image

1. Perception and encoding

LAMs begin by processing multimodal inputs such as images, depth maps, tactile feedback, and proprioceptive data. These signals are passed through convolutional neural networks (CNNs) or vision transformers (ViTs) to extract spatial and semantic features. The result is a high-dimensional embedding that captures the current state of the environment.
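The shape of this encoding step can be sketched in plain Python. The toy encoder below stands in for a CNN or ViT: it simply mean-pools each modality and concatenates the results into one state embedding. The pooling rule and the input shapes are illustrative assumptions, not how production encoders work.

```python
def encode_observation(rgb, depth, touch):
    """Toy multimodal encoder: mean-pools each modality and concatenates
    the results into one state embedding. A real LAM would run a CNN or
    vision transformer here; the pooling is purely illustrative."""
    rgb_feat = [sum(ch) / len(ch) for ch in rgb]   # one mean per colour channel
    depth_feat = [sum(depth) / len(depth)]         # scalar depth summary
    touch_feat = list(touch)                       # raw tactile readings pass through
    return rgb_feat + depth_feat + touch_feat      # concatenated state embedding

emb = encode_observation(
    rgb=[[0.2, 0.4], [0.1, 0.3], [0.0, 0.6]],  # 3 channels, 2 "pixels" each
    depth=[1.0, 2.0, 3.0],
    touch=(0.05, 0.0),
)
print(len(emb))  # 6: three colour means + one depth mean + two tactile values
```

Downstream stages (goal interpretation, planning, control) all operate on embeddings of this kind rather than on raw pixels or sensor readings.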

2. Goal interpretation

The model interprets user-provided commands or task specifications, typically given in natural language. It uses a language encoder, such as a transformer-based model like T5 or BERT, to convert this input into a structured representation that aligns with the model's internal planning schema and action space.
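A heavily simplified sketch of this mapping is shown below. A keyword lookup stands in for the transformer encoder, and the `ACTION_SCHEMA` mapping is hypothetical; the point is only that free-form text ends up as a structured object the planner can consume.

```python
# Hypothetical mapping from verbs to the model's action primitives.
ACTION_SCHEMA = {"make": "assemble", "pick": "grasp", "move": "navigate"}
STOPWORDS = {"a", "an", "the"}

def interpret_goal(command):
    """Toy goal interpreter: maps a natural-language command onto an
    action space. A real LAM would encode the text with a transformer
    (e.g., T5 or BERT) and align the embedding with learned action
    representations; this sketch uses a keyword lookup instead."""
    words = [w for w in command.lower().split() if w not in STOPWORDS]
    verb = next((w for w in words if w in ACTION_SCHEMA), None)
    target = words[-1] if words else None
    return {"primitive": ACTION_SCHEMA.get(verb), "target": target}

goal = interpret_goal("make a sandwich")
print(goal)  # {'primitive': 'assemble', 'target': 'sandwich'}
```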

Confused between machine learning and NLP? Discover how machine learning powers decision-making while natural language processing helps machines understand and communicate with humans.

3. World modeling

LAMs construct a predictive representation of the environment using learned world models. These models simulate how the environment will evolve in response to different actions. Techniques like latent dynamics modeling or neural simulators allow the system to forecast future states and estimate the consequences of potential action sequences.
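The core idea, predicting future states without acting in the real world, can be sketched with a toy transition function. Here a fixed linear rule stands in for learned latent dynamics (an assumption for illustration only); the `rollout` helper "imagines" a whole action sequence ahead of time.

```python
def predict_next_state(state, action):
    """Toy world model: predicts the next latent state from the current
    state and a candidate action. Real LAMs learn these dynamics from data
    (latent dynamics models, neural simulators); here the transition is a
    fixed linear rule chosen purely for illustration."""
    return [s + 0.5 * a for s, a in zip(state, action)]

def rollout(state, actions):
    """Imagine an entire action sequence without touching the real world."""
    trajectory = [state]
    for action in actions:
        state = predict_next_state(state, action)
        trajectory.append(state)
    return trajectory

traj = rollout([0.0, 0.0], [[1.0, 0.0], [1.0, 2.0]])
print(traj[-1])  # [1.0, 1.0]
```

Forecasting like this is what lets the planner in the next step compare candidate action sequences cheaply before committing to one.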

4. Action planning

The model generates a policy or action sequence using planning algorithms such as model-predictive control (MPC), hierarchical reinforcement learning (HRL), or trajectory optimization. This step involves selecting the optimal sequence of discrete or continuous actions based on predicted state transitions and task-specific reward functions.
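One of the simplest planners in this family is random shooting, sketched below in the spirit of MPC: sample candidate action sequences, roll each through the world model, and keep the one whose final state lands closest to the goal. The squared-Euclidean cost and the stand-in linear model are illustrative assumptions; real LAMs use learned dynamics and task-specific reward functions.

```python
import random

def plan(state, goal, model, horizon=3, candidates=200, seed=0):
    """Minimal random-shooting planner in the spirit of model-predictive
    control: sample candidate action sequences, roll each through the
    world model, and keep the sequence whose final state lands closest
    to the goal (squared Euclidean cost, chosen for illustration)."""
    rng = random.Random(seed)
    best_seq, best_cost = None, float("inf")
    for _ in range(candidates):
        seq = [[rng.uniform(-1.0, 1.0) for _ in state] for _ in range(horizon)]
        s = state
        for action in seq:
            s = model(s, action)  # imagined transition, no real-world step
        cost = sum((si - gi) ** 2 for si, gi in zip(s, goal))
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost

def linear_model(s, a):  # stand-in for a learned world model
    return [si + 0.5 * ai for si, ai in zip(s, a)]

best_seq, best_cost = plan([0.0, 0.0], [1.0, 1.0], linear_model)
print(len(best_seq), round(best_cost, 3))
```

Production planners replace the random sampling with structured optimization (trajectory optimization, HRL policies), but the shape of the loop, propose, simulate, score, select, is the same.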

5. Motor control execution

Low-level control signals are synthesized to execute each step in the action plan. These may include torque values, joint angles, or motion primitives, computed using inverse kinematics, PID controllers, or learned motor policies. The commands are sent to actuators or simulated agents to carry out the task in real-time.
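A textbook PID loop, the kind of low-level controller mentioned above, looks like this. The gains, the 1-D velocity-controlled "joint", and the target of 1.0 rad are all illustrative assumptions.

```python
class PIDController:
    """Textbook PID loop: the kind of low-level controller that turns a
    planned joint target into actuator commands. Gains are illustrative."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy velocity-controlled joint toward a target of 1.0 rad.
pid = PIDController(kp=2.0, ki=0.1, kd=0.05, dt=0.01)
angle = 0.0
for _ in range(2000):
    command = pid.step(setpoint=1.0, measured=angle)
    angle += command * 0.01  # toy plant: the command directly sets joint velocity
print(round(angle, 2))
```

In a real robot the controller output would be a torque or motor current constrained by the hardware's force and reaction-time limits, and the measured angle would come from encoders rather than the simulated plant above.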

6. Feedback and adaptation

LAMs continuously monitor execution outcomes using real-time sensor data. If deviations from expected states are detected, the model updates its internal state and replans as needed using closed-loop feedback. Adaptive components such as recurrent policies or meta-learned controllers improve resilience and flexibility in unstructured or changing environments like outdoor terrains with variable lighting and weather, or human-robot interaction spaces.
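The essence of this closed-loop behavior fits in a few lines. In the sketch below, a hard-coded disturbance at step 2 stands in for sensor noise or an external push; because the controller recomputes its action from the latest observation every step, the disturbance is absorbed rather than derailing the plan.

```python
def act(state, action, disturbed=False):
    """Real environment step: the world sometimes pushes back (here a
    hard-coded -0.3 disturbance stands in for noise or an external bump)."""
    return state + action + (-0.3 if disturbed else 0.0)

def closed_loop(goal, steps=10, tolerance=0.05):
    """Closed-loop execution in the spirit of a LAM's feedback stage:
    after every action the controller re-reads the state and recomputes
    the next action, so a mid-run disturbance is corrected instead of
    compounding."""
    state, trace = 0.0, []
    for t in range(steps):
        if abs(goal - state) < tolerance:
            break  # close enough to the goal: stop acting
        action = 0.5 * (goal - state)                    # replan from the latest observation
        state = act(state, action, disturbed=(t == 2))   # inject a push at step 2
        trace.append(round(state, 3))
    return state, trace

final, trace = closed_loop(goal=1.0)
print(round(final, 2))  # ~0.99: the step-2 disturbance is corrected
```

An open-loop plan computed once at the start would have missed the goal by exactly the disturbance; the replanning loop is what buys resilience in unstructured environments.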

Benefits of a large action model

LAMs change how AI systems interact with and reason about the physical world. By integrating perception, planning, and control, they exhibit more adaptive, intelligent, and generalizable behavior in complex real-world scenarios.

1. Greater generalization

LAMs can handle a wide range of tasks across different environments without needing task-specific retraining. Because they are trained on diverse datasets that combine sensory inputs, motor commands, and task instructions, they learn reusable representations that allow them to generalize beyond the training distribution. This makes them highly suitable for dynamic and unstructured settings.

Google DeepMind’s Open X-Embodiment project demonstrates LAMs trained across multiple robot platforms (e.g., robot arms, mobile robots) that generalize to new tasks and hardware configurations with minimal fine-tuning. The models achieve this by sharing action and perception representations across embodiments.

Looking for the right approach to building your AI models? Explore the concepts of RAG and fine-tuning, and the factors to weigh when choosing between RAG vs. fine-tuning.

2. Real-world adaptability

LAMs integrate sensory feedback (e.g., vision, proprioception, tactile data) with adaptive control strategies, making them resilient to uncertainty, sensor noise, and physical disturbances. Their ability to replan and adapt in real time improves their reliability in complex, real-world deployments.

In a study by Miki et al. (2022), researchers trained a quadrupedal robot to walk autonomously in outdoor environments like snow-covered trails, rocky slopes, and foggy terrain. The system fused vision and proprioceptive inputs using an attention-based recurrent encoder to adapt its gait in real time.

3. Improved sample efficiency

LAMs simulate environments internally, which reduces the need for large numbers of real-world trial-and-error episodes. Through learned world models and latent dynamics, they can “imagine” multiple outcomes before acting, leading to more efficient policy learning and fewer physical interactions required.

The DreamerV3 agent uses latent world models and planning (similar to LAMs) to achieve sample-efficient learning in both simulated and real-robot environments. It outperforms model-free methods using far fewer environment steps.

Build your own AI assistant from scratch! Learn how to create and deploy a terminal-based ChatGPT bot using Python and OpenAI APIs.

4. Multimodal integration

LAMs combine various data types, such as visual, spatial, linguistic, and haptic, into a unified decision-making pipeline. This allows them to understand complex instructions and contextual cues while mapping them to precise physical actions.

PaLM-E, a 562B-parameter model, combines language, vision, and embodied reasoning to allow robots to follow natural language commands, recognize objects, and perform routine tasks.

Explore Multimodal Large Diffusion Language Models (MMaDA), a diffusion-based model that blends text and image understanding, now runnable on DigitalOcean GPU Droplets for faster, cost-effective AI generation.

Confused between NLP and NLU? Read this article to understand how they differ and work together to make AI conversations more human-like.

Differences between LAM and LLM agents

While both LAMs and LLM agents are built on deep learning architectures and use transformer-based models, their goals, inputs, and outputs are fundamentally different. LLM agents operate in the domain of language and symbolic reasoning, whereas LAMs extend these capabilities into physical interaction and real-world decision-making.

| Feature | LAM | LLM agent |
| --- | --- | --- |
| Core function | Executes physical actions in real-world or simulated environments | Processes and generates text-based responses or reasoning |
| Input types | Multimodal (vision, proprioception, tactile, language) | Text (natural language prompts, structured inputs) |
| Output types | Action policies, motor commands, trajectory plans | Text responses, API calls, and task decomposition |
| Components | Perception module, world model, action planner, control system | Language encoder, memory module, and tool-calling capabilities |
| Use cases | Robotics, autonomous navigation, and embodied AI agents | Chatbots, coding assistants, and knowledge retrieval agents |
| Real-time feedback | Responds to continuous, low-level sensor feedback from the environment (e.g., vision, proprioception) | Responds to discrete feedback from tools, APIs, or user inputs during a multi-turn interaction |
| Training data | Behavior datasets, sensory logs, and action sequences | Text corpora (e.g., books, web data, code repositories) |
| Example models | RT-2, PaLM-E, Open X-Embodiment | GPT-4, Claude, Gemini, Code Llama |

You can now build an AI agent or chatbot in six steps using the DigitalOcean Gradient platform.

Challenges of large action models

Despite their growing potential, LAMs face challenges when bridging high-level intelligence with real-world physical actions. These challenges arise from the complexity of environments and the computational demands of integrating perception, reasoning, and control into a single architecture.

1. Real-world grounding

LAMs must accurately interpret raw sensory data (e.g., vision, depth, touch) and map it to meaningful representations of the environment. This grounding is difficult in unstructured or cluttered settings where objects may be partially visible, occluded, or constantly moving. Small errors in perception can lead to cascading failures in downstream planning and control.

2. High data requirements

LAMs need vast and diverse multimodal datasets that pair sensory inputs with physical actions across different contexts. Gathering such data in real environments is time-consuming, expensive, and error-prone. Simulated data can help, but lacks the richness and unpredictability of the real world.

3. Safety and control precision

Executing physical actions requires precise control over actuators. A small mistake in torque or joint angle can cause hardware damage or safety risks in collaborative human-robot environments. Unlike LLMs, LAMs must adhere to real-world physical constraints, including friction, force limits, and reaction time.

4. Generalization under dynamics

LAMs must adapt to changes in the environment over time, like lighting, object positions, or user behavior, without constant retraining. Generalization becomes more difficult when environments introduce nonlinear dynamics or unexpected perturbations.

Large action model use cases

LAMs are effective in high-stakes contexts involving task decomposition, real-world interactivity, and decision-making.

1. Autonomous robotics

LAMs help robots move beyond reactive behavior by planning extended sequences of actions. In warehouse logistics, for instance, a robot can be prompted to “pick and pack all items for order #1245,” and the LAM will translate that into a structured series of steps involving object identification, path planning, manipulation, and coordination with other robots. Unlike traditional robotic systems that rely on rigidly pre-programmed instructions, LAMs adapt in real-time to environmental changes.

Covariant, an AI robotics company, uses LAM-like architectures to build robotic arms that perform warehouse pick-and-pack operations. Their system combines vision, planning, and control modules to execute long-horizon tasks. Covariant’s “Foundations Model for Robotics” is trained on millions of real-world actions to generalize across different object types and workflows.

2. Game-playing agents

In game development and open-world simulations, LAMs support agents in making long-term decisions with branching possibilities. Instead of selecting the next move based solely on immediate reward, the model can plan a series of actions such as resource gathering, unit training, and strategic positioning, aligned to high-level goals like “dominate territory” or “build a trade empire.” This mimics human-like reasoning and planning.

DeepMind’s AlphaStar used LAM-like structures to compete in StarCraft II, planning actions across multiple time horizons. Similarly, OpenAI Five, while not strictly called a LAM, shares a comparable architecture for executing high-level strategy in Dota 2. The agent learned to manage long action sequences involving team coordination, map control, and item optimization.

Choosing the right game hosting provider can free up your developers, cut costs, and help you focus on building great gaming experiences.

3. User interface automation

LAMs are not limited to controlling physical robots; they can also perform complex tasks in digital environments by perceiving screen elements, planning action sequences, and executing them in real time. Unlike traditional automation scripts, LAMs adapt to changing UI layouts and can handle multimodal instructions.

Rabbit’s LAM Playground has an agent that takes visual snapshots of a webpage, interprets elements like buttons and text fields, and performs user-guided actions like booking tickets, filling forms, or managing apps. These actions are executed live through the Rabbithole browser interface, applying real-time perception-action loops in a digital setting.
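This perception-plan-act loop over a digital interface can be sketched in a few lines. The mock screen, element labels, and task schema below are hypothetical stand-ins: a real UI agent would extract the element list from live screenshots and re-perceive the screen after every action.

```python
# Mock "screen state": a real agent's perception module would extract this
# element list from a live screenshot; the labels here are hypothetical.
SCREEN = [
    {"kind": "text_field", "label": "From", "value": ""},
    {"kind": "text_field", "label": "To", "value": ""},
    {"kind": "button", "label": "Search"},
]

def find(label):
    """Locate a perceived element by its label."""
    return next(e for e in SCREEN if e["label"] == label)

def ui_plan(task):
    """Turn a structured task into a type/click action sequence against
    perceived screen elements. Real UI agents re-perceive after every
    action; this sketch plans once over a fixed mock screen."""
    actions = [("type", label, text) for label, text in task["fields"].items()]
    actions.append(("click", task["submit"]))
    return actions

def execute(actions):
    """Apply the planned actions to the mock screen; return field values."""
    for action in actions:
        if action[0] == "type":
            find(action[1])["value"] = action[2]
    return [e["value"] for e in SCREEN if e["kind"] == "text_field"]

script = ui_plan({"fields": {"From": "SFO", "To": "JFK"}, "submit": "Search"})
result = execute(script)
print(result)  # ['SFO', 'JFK']
```

Because the plan is derived from the perceived element list rather than hard-coded coordinates, the same approach tolerates layout changes that would break a traditional automation script.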

4. Robotics-assisted surgery

In healthcare, surgical robots guided by LAMs can assist with complex procedures by following a multi-step surgical plan generated from high-level instructions. For example, “perform laparoscopic appendectomy” involves dozens of coordinated actions like incision, camera navigation, dissection, and suturing. LAMs allow for fine-grained control while maintaining contextual awareness of the surgical workflow. While promising, LAMs in surgical robotics are still in the experimental phase. Clinical deployment requires extensive validation, regulatory approvals, and oversight to ensure patient safety and ethical compliance.

Researchers at Johns Hopkins developed an autonomous surgical robot that performed soft-tissue surgery using a vision-planning-action loop akin to LAMs. The Smart Tissue Autonomous Robot (STAR) executed tasks like suturing with higher precision than human surgeons.

5. Scientific research automation

LAMs are being applied in laboratory automation to plan and carry out experimental procedures. Given a goal like “synthesize a new material with desired conductivity,” a LAM can design experiments, control robotic lab equipment, adjust parameters based on observations, and refine hypotheses.

ORGANA is a robotic assistant that interacts with chemists, controls lab devices, and performs multi-step experiments such as solubility tests, pH measurement, and electrochemistry. It uses vision-guided planning and real-time feedback to sequence complex protocols (19 steps in one test), adapting to experimental data.


Large action models FAQ

Can I build and train a large action model on my own infrastructure?

Yes, but it requires GPU resources, high-throughput data pipelines, and robotics or simulation environments. Using cloud GPU infrastructure like DigitalOcean GPU Droplets can help you scale without upfront hardware investment.

How are LAMs tested before being deployed in the real world?

LAMs are usually tested in high-fidelity simulation environments that mimic real-world physics and visuals. These simulations help validate perception and action planning without risking hardware or safety.

Are LAMs only useful for robotics?

Not at all. While robotics is an important application, LAMs are also used in digital domains like UI automation, scientific research, and strategy games; they fit anywhere agents need to plan and act based on sensory input and goals.

Accelerate your AI projects with DigitalOcean Gradient GPU Droplets

Accelerate your AI/ML, deep learning, high-performance computing, and data analytics tasks with DigitalOcean Gradient GPU Droplets. Scale on demand, manage costs, and deliver actionable insights with ease. Zero to GPU in just 2 clicks with simple, powerful virtual machines designed for developers, startups, and innovators who need high-performance computing without complexity.

Key features:

  • Powered by NVIDIA H100, H200, RTX 6000 Ada, L40S, and AMD MI300X GPUs

  • Save up to 75% vs. hyperscalers for the same on-demand GPUs

  • Flexible configurations from single-GPU to 8-GPU setups

  • Pre-installed Python and Deep Learning software packages

  • High-performance local boot and scratch disks included

  • HIPAA-eligible and SOC 2 compliant with enterprise-grade SLAs

Sign up today and unlock the possibilities of DigitalOcean Gradient GPU Droplets. For custom solutions, larger GPU allocations, or reserved instances, contact our sales team to learn how DigitalOcean can power your most demanding AI/ML workloads.

About the author

Sujatha R
Technical Writer

Sujatha R is a Technical Writer at DigitalOcean. She has over 10 years of experience creating clear and engaging technical documentation, specializing in cloud computing, artificial intelligence, and machine learning. She combines her technical expertise with a passion for helping developers and tech enthusiasts navigate the complexity of the cloud.

