Artificial intelligence is a technology that feels both brand-new and ancient. It seems new because generative models exploded into public life in just a few years; it seems ancient because people have imagined “thinking machines” for centuries. Below is a single, readable tour: the origin story, who builds and governs AI, what happens to the data, who is winning today, and what the futures, both good and bad, look like for people, nature, and the wider universe.
1) When, where and how AI was first developed — a short timeline
- Ideas and precursors (1800s–1940s). Charles Babbage and Ada Lovelace sketched the conceptual groundwork for programmable machines; later mathematicians and engineers such as Turing and von Neumann built the technical groundwork for computation. Alan Turing’s 1950 paper framed the idea of machine intelligence and proposed the “imitation game” (the Turing Test). (Wikipedia)
- Birth of the field (1956). The Dartmouth Workshop, organized by John McCarthy, Marvin Minsky, and others, is usually treated as the founding event of “artificial intelligence” as an academic discipline, and the term was coined there. Early work focused on symbolic reasoning and logic, which later gave rise to “expert systems.” (Wikipedia)
- Cycles of optimism and “AI winters” (1960s–1990s). Progress alternated with disappointment as hardware and algorithms limited results; symbolic AI stalled, and neural methods fell out of favor before re-emerging.
- Modern era (2000s–present): deep learning, data + compute. Two factors transformed the field: huge datasets (internet-scale text, images, and telemetry) and orders-of-magnitude increases in computing power from GPUs and TPUs. From around 2012 onward, deep neural networks began to outperform previous approaches in vision, speech, and language tasks, and by the 2020s large foundation models (LLMs, multimodal models) had become general-purpose tools. (Tableau)
Short takeaway: AI grew from theoretical ideas (Turing) into an organized research field at Dartmouth in 1956, evolved through symbolic systems and neural-network revivals, and entered the current era driven by large datasets and specialized compute.
2) Why it was developed — motives and drivers
- Practical utility: automate boring, repetitive tasks (translation, search, image recognition), increase productivity, and create new products (recommendations, virtual assistants).
- Scientific curiosity: to understand intelligence — both human and machine — and to solve hard computational problems.
- Economic and strategic competition: Companies and nations view AI as a major driver of economic growth, military capability, and geopolitical leverage. The potential to reshape entire industries (healthcare, finance, logistics, entertainment) created intense private investment and public policy interest. (Morgan Stanley)

3) Who controls AI today?
Control is distributed and layered:
- Big technology companies (U.S. and China primarily) — the largest models, cloud infrastructure, and chips are concentrated in a handful of firms (examples: OpenAI, Google/DeepMind, Microsoft, Meta, Anthropic, Baidu, Huawei). These companies control the most capable models, the data-center capacity, and commercial distribution channels. Market incentives and access to capital make them central gatekeepers. (Rest of World)
- Governments and regulators — they control legal access, procurement, and safety requirements. National strategies, including funding, export controls, and data rules, shape who can effectively build and deploy high-end AI. Different countries pursue different mixes of industrial policy and regulation (U.S., EU, China, etc.). (The White House)
- Academia and open-source communities — universities, labs, and open-source groups drive core research and make knowledge public, but training cutting-edge systems often requires private compute budgets, which limits full parity with industry labs. (Wikipedia)
Net effect: control is concentrated where capital, compute, and data meet — i.e., large companies and states — but research and open communities still influence architectures and norms.
4) What happens to the data that AI platforms collect?
Data flows and uses are central to how modern AI works, and they raise legal, ethical, and practical issues.
- Collection and storage. Platforms collect user queries, uploaded content, telemetry, and large swathes of public web content, and store this data for quality improvement, training, safety monitoring, and product development. Some have explicit opt-in/opt-out settings; others change policies over time. (Example: Anthropic recently updated policies to use user chats for training unless users opt out.) (WIRED)
- Model training. Large models are trained or fine-tuned on aggregated datasets, which can include public posts, licensed data, and, in some cases, user interactions. Regulators in some regions have challenged or limited such uses when users were not informed or consent was inadequate; regulators in Brazil and parts of Europe, for example, have scrutinized, and in some cases blocked, uses of personal data for model training. (TIME)
- Privacy risks and leakage. Models can unintentionally memorize and reproduce sensitive information, a real risk when training data contains personal or private content. That creates legal issues under privacy regimes (GDPR, national laws) and technical challenges for differential privacy, data minimization, and auditing; a minimal differential-privacy sketch appears at the end of this section. (TrustArc)
- Commercialization and derivatives. Companies can monetize derivative outputs, build products on top of user data, or license models to customers. Data can also be used for targeted advertising, profiling, and other commercial applications. That raises questions about consent, ownership, and fair compensation for content creators.
- Regulatory response. Regulators are actively developing rules governing data use for AI (e.g., the EU AI Act guidance, national data-protection rulings), and courts and privacy authorities have begun issuing orders and penalties in some cases. (European Data Protection Board)
Bottom line: Data collected by AI platforms is stored, reused, and often repurposed for training and product improvement, a practice with regulatory and privacy consequences that remains actively contested and evolving.
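To make "differential privacy" concrete, here is a minimal sketch of the Laplace mechanism, one standard building block of the technique. Everything in it (the toy dataset, the epsilon value, the query) is an illustrative assumption, not a description of how any particular platform actually works:

```python
import numpy as np

def private_count(values, threshold, epsilon=1.0):
    """Release a noisy count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    person changes the count by at most 1), so the Laplace noise
    scale is sensitivity / epsilon.
    """
    true_count = sum(1 for v in values if v > threshold)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy example: publish roughly how many users are over 40
# without revealing the exact count.
ages = [23, 35, 41, 52, 67, 29, 71]
print(private_count(ages, threshold=40, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; the tension between useful answers and protected individuals is exactly the engineering challenge described above.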

5) Who is benefiting the most right now?
Winners today cluster into several groups:
- Infrastructure and chip makers (first-order beneficiaries). Companies that produce GPUs, TPUs, and data-center gear, such as NVIDIA, AMD, and the cloud providers, have seen massive demand because large models require specialized compute. Financial analysts identify chip and infrastructure suppliers as major beneficiaries. (Morgan Stanley)
- Big tech platforms and cloud providers. Firms that can host, sell, and integrate models, including Microsoft, Google, Amazon, Meta, and OpenAI’s partners, monetize AI through cloud services, productivity tools, advertising improvements, and enterprise solutions. (Rest of World)
- Investors and AI-focused startups. Venture capital is flowing into startups offering narrow AI solutions, and sectors such as healthcare, legal, customer support, and finance are seeing AI-enabled investment boosts; a large share of recent health-tech funding, for example, has gone to AI-focused companies. (World Health Expo)
- Organizations that can deploy AI at scale. Large enterprises with data and integration capacity, such as banks, retailers, and hospital networks, see productivity gains and can extract value faster than smaller players.
- Researchers and citizens (indirectly). There are big public benefits too, including new scientific tools, faster drug-discovery workflows, and accessibility improvements, though these benefits are diluted by concentration and access barriers.
Short answer: The biggest short-term beneficiaries are those who own the compute, data, and distribution channels: chip manufacturers, cloud providers, and major tech companies, along with the investors funneling capital into AI-enabled sectors.

6) Predicted futures — plausible scenarios
No single prediction is certain; instead, think in scenarios that combine technical progress, policy, and societal choices.
A. Augmentation & productivity boom (optimistic mainstream)
- AI becomes a ubiquitous assistant for knowledge work, research, and creativity, accelerating productivity, lowering costs, and unlocking new services such as personalized education and earlier disease detection. Economic growth rises, new classes of jobs emerge, and many routine tasks are automated. Benefits are large but uneven unless policies (retraining, redistributive measures) are put in place.
B. Concentration & inequality (likely if current trends continue)
- Value concentrates in a few firms/countries that control the most advanced models and infrastructure. This produces powerful incumbents, winner-take-most markets, and political strains. Without strong governance, inequality (wealth and bargaining power) may increase.
C. Regulatory fragmentation & geopolitics
- Different regulatory regimes (EU precautionary rules, U.S. innovation-first, China strategic control) produce fragmented standards, data localization, and supply-chain decoupling. That could slow some innovation but also spur national AI stacks and security competition. (Artificial Intelligence Act)
D. Safety and misuse risks
- Advanced models, if unconstrained, could be misused for fraud, disinformation, or automated cyber-attacks. They could also pose risks in rare catastrophic scenarios like biotech misuse or infrastructure sabotage. Governments and firms are already building monitoring and disclosure rules to reduce such risks. Recent laws (e.g., new transparency/safety measures in California) show policy is moving fast. (Reuters)
E. Environmental & resource constraints
- Continued growth in model sizes and deployment means increased electricity and water demand for data centers, a sustainability concern unless compute gets dramatically more efficient or is powered by green energy. Research shows training and operating large generative models has a non-trivial carbon and water footprint; a rough back-of-envelope sketch follows below. (MIT News)
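To make the resource question concrete, here is a back-of-envelope estimate. Every figure (cluster size, power draw, overhead, grid intensity) is an assumed round number chosen for illustration, not a measurement of any real training run:

```python
# Rough training-energy estimate. All inputs are illustrative assumptions.
gpu_count = 1_000          # hypothetical accelerator cluster
gpu_power_kw = 0.7         # ~700 W per accelerator (assumed)
training_days = 30
pue = 1.2                  # data-center overhead: cooling, power delivery
grid_kgco2_per_kwh = 0.4   # assumed grid carbon intensity

energy_kwh = gpu_count * gpu_power_kw * 24 * training_days * pue
emissions_tonnes = energy_kwh * grid_kgco2_per_kwh / 1_000

print(f"Energy: {energy_kwh:,.0f} kWh")              # ~604,800 kWh
print(f"Emissions: {emissions_tonnes:,.1f} t CO2e")  # ~241.9 t
```

Even with these modest assumptions, one run uses on the order of several dozen U.S. households’ annual electricity, which is why power sourcing and efficiency matter so much at scale.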
7) Pros and cons — the tradeoffs for humanity, nature and (broadly) the universe
Pros for humanity
- Productivity and innovation: automation of repetitive work, faster scientific discovery, medical diagnostics, and better personalized services.
- Access & inclusion: language translation, assistive technologies, and democratized tools can increase access to knowledge and services.
- Solving complex problems: better climate models, optimized logistics, and improved resource allocation can help tackle big challenges.
Cons for humanity
- Job displacement & economic disruption: automation may eliminate roles before new ones are widely available. The transition could be painful without policy safety nets.
- Bias, fairness, and misinformation: models trained on biased data can reinforce stereotypes or generate harmful disinformation.
- Privacy erosion: pervasive data collection risks surveillance and loss of control over personal information. (TrustArc)
Pros for nature
- Optimized resource use: AI can reduce waste (smart grids, precision agriculture), help model ecosystems, and design greener systems.
- Climate science: faster modelling and simulations can improve climate predictions and adaptation strategies.
Cons for nature
- Energy & water consumption: Large-scale AI compute increases electricity demand and cooling-water needs, and raises emissions when powered by fossil fuels. There is growing evidence of significant carbon footprints tied to training and deploying large models. (Institute of Energy and the Environment)
Pros for the wider universe (philosophical/long-term)
- Knowledge acceleration: AI could expand scientific discovery (astronomy, materials) at rates humans alone can’t, unlocking new capabilities.
- Longevity & health: improved biomedical research might extend healthy lifespans.
Cons for the wider universe (ethical/philosophical)
- Existential risk (speculative): some thinkers worry about long-run scenarios where superintelligent systems misalign with human goals. While debated, this risk motivates governance, safety research, and international coordination.
- Irreversible environmental damage: if energy and resource use spike unchecked, long-term planetary limits could be stressed.
8) What to watch and what society should do
- Transparency and data rights. Demand clearer policies about how chat logs, uploads, and public content are used for training. Opt-in/opt-out mechanisms and strong data-protection enforcement matter, and recent company and regulatory moves make this a front-line issue. (WIRED)
- Regulation that balances safety and innovation. Laws like the EU AI Act and recent state-level safety-disclosure laws illustrate the evolving policy mix: risk-based rules, safety reporting, and standards for high-impact systems. Coordination across countries is crucial to avoid fragmentation while protecting rights. (Artificial Intelligence Act)
- Energy and environmental standards. Track data-center power sourcing and efficiency improvements, and whether AI providers commit to green energy or carbon offsets; without such measures, the environmental cost will rise. (MIT News)
- Public investment in capabilities for the public good. Governments can fund open research, public-interest models, and “third-stack” infrastructure, which reduces dependence on a few firms and helps democratize access. (Brookings)
9) Final, practical takeaway
AI is neither an automatic utopia nor an unavoidable catastrophe. It is a multipurpose technology whose impact will be decided by who builds it, who governs it, who benefits from it, and how we manage its environmental and social costs. Right now, control and profits tilt toward a few large firms and wealthy nations, data practices are in flux and under legal scrutiny, and environmental costs are real and growing. The best path ahead requires smart regulation, public investment, transparency about data and safety, and technological effort to make AI more efficient and more equitable.
Sources and further reading (selected)
- History and origins: Coursera and Wikipedia overviews.
- Who controls AI / geopolitics: Rest of World analysis; Brookings on technology stacks.
- Data and privacy: WIRED on Anthropic’s policy change; EDPB opinion on data protection and AI.
- Beneficiaries & investment trends: Morgan Stanley and healthcare/VC coverage.
- Regulation & governance: EU AI Act developments; California SB 53; U.S. executive actions.
- Environmental impact: MIT coverage and academic analyses of model carbon footprints.
Thanks for reading Chonsview blog today. Your support is highly appreciated.