AI & Technology

    Google & Meta Share a Common Antitrust Thread: Advertising

    Google and Meta have come under intense antitrust scrutiny over the past few years, especially in the US and the EU. It’s worth remembering that the two firms have more in common than one might think: at their core, both are advertising companies.

    Regulators have targeted Google for allegedly abusing monopoly power in search, and Meta for consolidating its properties (Instagram, WhatsApp, Facebook) to stifle competition. What’s often missing from the discussion is the thread that ties these properties together: their respective ad products.

    Google

    The Verge reported that, “Judge Brinkema found Google “liable under Sections 1 and 2 of the Sherman Act” due to its practices in the ad tech tool and exchange spaces but dismissed the argument that Google had operated a monopoly in ad networks.” In a separate ruling last year, a court found that Google had illegally monopolized search, and the DOJ has since proposed remedies that include divesting Chrome. By themselves, Chrome, YouTube, Android, Gemini, and search are not the issue; rather, the ad platforms moving through them create a monopoly, because Google owns both the buy and sell sides of its advertising exchange and captures the revenue generated across the ad market.

    Separating out Chrome by itself won’t accomplish much if nothing restrains Google from simply forking Chromium, the open-source project Chrome is based on, and creating a new browser. Either way, Google would retain the monopoly stemming from its acquisition of DoubleClick, which closed back in 2008. If the DOJ wants to remedy the monopoly, it should focus on Google’s control of both the buy and sell sides of the digital advertising economy. In recent years, Google has attempted to diversify its revenue away from advertising into other businesses, such as its cloud offerings and the $32 billion acquisition of Wiz, a large cloud security firm.

    Meta

    Now we move on to Meta. Just today, the company announced that its Threads platform will begin offering advertising in a limited capacity, likely to expand just as it has on every other Meta property. Like Google, Meta is not immune to litigation. This morning, the EU fined both Apple and Meta in the latest of a long line of penalties for violating the Digital Markets Act (DMA). The European Commission’s issue is one of privacy: Meta’s “pay or consent” model forces users to either allow their data to be used for ads or pay for ad-free versions of its services. Like Google, Meta has invested heavily in AI and the metaverse, likely as diversification plays.

    Remedies

    In many of the cases brought by the European Commission, the US DOJ, and state AGs, the suggested remedy is to break Alphabet (Google) and Meta into their individual parts. If that happens, the ad businesses must be broken up along with them: the more properties Google and Meta own and create, the deeper the ad network and its revenue. OpenAI has already suggested it would purchase Chrome from Google should it be divested. But that just creates another problem: another massive tech conglomerate, one already giving traditional Google search a run for its money, would simply replace the juggernaut with itself. The Sherman Antitrust Act of 1890 has been cited as precedent for the potential breakups; however, it is dated legislation, written to deal with the Standard Oil and AT&T of their times, not modern big tech.

    What’s Next?

    Both companies have stated that they will appeal these rulings, and any final decisions will likely take years. Given the pace at which technology moves, we must ask whether these two firms will still look and act the same in a two-to-five-year timeframe. If recent AI growth trajectories hold, Google’s Gemini, Meta’s Llama models, and both companies’ AI R&D will change substantially in just the next three months.

    As before, Google and Meta will be pressured to monetize AI in a changing landscape where traditional search is upended by agentic AI. What would current ad networks look like across LLMs? That is a discussion for later down the road, once methodologies are tested.

    Experiences Using Micro.blog so Far in 2025

    It’s been a few months now since I started using Micro.blog as my main website and posting service, so I thought I would take some time to reflect upon it.

    To start, I was looking for a replacement for WordPress, given all the drama over at Automattic, its owner, but it wasn’t only that. I was writing less and still paying for hosting, so it no longer fit my needs. I was able to seamlessly transfer my existing posts to Micro.blog and save some money in the process. I signed up for the Micro.one tier, which costs only $10 a year, not counting my domain, which I pay for through Hover. That alone saved a little money and more accurately reflects how infrequently I was posting.

    I like how easy the UI makes it to just type and post. A post can be as short as the quick 300-character variety or as long as an article requires. I will eventually learn Markdown so I can really start organizing my posts and keeping them uniform, but that will come later.

    There’s one last feature I appreciate, called “Bookshelves”. It’s analogous to Goodreads and lets you keep track of what you’re reading, want to read, and have completed. I like that it gives you the option to build a post around a book, whether to share quick thoughts or to make writing a full review easier. While I only log the books I finish, it’s had one more benefit: I’ve read more this year, so far, than at any time since college.

    When I set my goals at the beginning of the year, reading more was not on my “to-do” list, but it happened, and I’m grateful to the feature for inspiring it. I set a goal of 12 books, which I’ve already blown past. I will likely write a post later this year about what has stood out in my reading so far.

    To make a long story short, I’m loving this service and its features. The simple UI/UX lets me just write and post, and as a result I find myself posting a bit more than I did over at WordPress. It might not be as full-featured, but if you just want to start writing and cross-posting to other services, Micro.blog gives you everything you need – well, at least everything my needs have become.

    A Quick Conversation about Apple's AI & Siri Problem

    In the technology sector, if you aren’t innovating, you’re falling behind. This is no different for Apple, which is not used to being behind on features. Generally, Apple waits until it has perfected a technology before introducing it to the public. Recently that hasn’t been the case, considering the cutbacks to the Apple Vision Pro and, this past week, to AI features.

    John Gruber, the famous Apple watcher behind Daring Fireball, published a scathing blog post about Apple falling behind with respect to LLMs and AI as a whole, entitled “Something Is Rotten in the State of Cupertino”.

    In his post, Gruber discusses how Apple’s unfulfilled announcements have hurt the company’s credibility with customers. Siri has long been flawed, with little real innovation in recent years, and with Google, OpenAI, and others surging ahead, he goes so far as to call the promised Siri capabilities “vaporware”.

    In previous iOS updates, Apple has had to deprecate and continually delay features because of bugs, AI hallucinations, and parlor tricks that fail to differentiate it from the likes of Google’s Gemini or Anthropic’s Claude. Voice assistants are as complex and innovative as ever, and now we’re witnessing the unfolding of what agentic browsers can accomplish.

    In a recent blog post, venture capitalist Om Malik, a legend in his own right, postulates that, “Apple has its own golden handcuffs. It’s a company weighted down by its own market capitalization and what stock market expects from it.”

    This reminds me of Clayton Christensen’s best-selling book, “The Innovator’s Dilemma”. The theory holds that dominant companies fail to adapt to newer disruptive technologies (AI and LLMs in this case), cling to their existing strengths, and ultimately fail. We see such a case with Intel, which missed the mobile generation and now stands at a crossroads between failure and being broken up and sold for parts.

    As we know, Apple has broken the innovator’s dilemma before: it cannibalized the successful iPod with the iPhone, eventually released the iPad, and built an ecosystem around them that made the company more successful with each pivot. It’s too soon to tell whether Apple has peaked, but the major setbacks with Siri and Apple Intelligence are alarming to shareholders and Apple stakeholders alike. It’s certainly a development to watch.

    Interview with Stephen Wolfram on AI and Machine Learning

    I don’t normally share clips from podcasts I listen to, but in this case, it’s well worth it. On Intelligent Machines, Episode 808, the creator of Mathematica and founder of Wolfram Alpha, Stephen Wolfram, shares his views on how he sees AI progressing from here.

    He brilliantly discusses how AI will augment humans rather than completely replace them in the workforce (something I’ve been advocating for a while), why AGI is not what we think it is today, how Wolfram Alpha approaches machine learning differently than most LLMs currently do, and the possible emergence of an “AI Civilization” that operates independently of human authority.

    The interview runs around 40 minutes, but it is well worth your attention.

    DeepSeek's Surprise Entrance into the AI Arena

    DeepSeek, a Chinese AI startup, has rapidly become a major disruptor in the AI landscape with its new AI model, R1. This model has gained global attention for its ability to compete with models like OpenAI’s ChatGPT, but at a significantly lower cost. The emergence of DeepSeek has caused ripples across the tech industry, impacting stock markets and sparking debates about data privacy and the future of AI development.

    DeepSeek was founded in mid-2023 by Liang Wenfeng, a Chinese hedge fund manager. The company’s AI model, DeepSeek R1, was released on January 20, 2025, and quickly gained popularity. R1 is an open-source large language model built on a sparse “mixture-of-experts” design, which activates only the most relevant parts of the model for each query, saving money and computational power.
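    The selective-activation idea described above – running only the most relevant parts of the model per query – is the essence of a sparse mixture-of-experts layer. Here is a minimal NumPy sketch of top-k expert routing; it is purely illustrative (the sizes, gating scheme, and names are my own assumptions, not DeepSeek’s actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_routing(x, gate_w, k=2):
    """Score every expert for this token, keep only the best k."""
    scores = x @ gate_w                        # shape: (n_experts,)
    top = np.argsort(scores)[-k:]              # indices of the k best experts
    weights = np.exp(scores[top] - scores[top].max())
    return top, weights / weights.sum()        # softmax over the selected k

def moe_forward(x, gate_w, experts, k=2):
    """Run only the selected experts and blend their outputs."""
    top, weights = top_k_routing(x, gate_w, k)
    return sum(w * experts[i](x) for i, w in zip(top, weights))

d, n_experts = 8, 16
gate_w = rng.normal(size=(d, n_experts))
# Each "expert" is a tiny linear layer; only k of the 16 run per token.
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, W=W: x @ W for W in expert_ws]

token = rng.normal(size=d)
out = moe_forward(token, gate_w, experts, k=2)
print(out.shape)   # same output shape as a dense layer, at a fraction of the compute
```

    The output matches what a dense layer would produce in shape, but only 2 of the 16 experts ever execute for a given token – which is where the cost savings come from at scale.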

    This efficiency has enabled DeepSeek to achieve results comparable to other AI models at a much lower cost. The company reportedly spent only $6 million to develop its model, compared to the hundreds of billions being invested by major US tech companies. Nvidia has described DeepSeek’s technology as an “excellent AI advancement,” showcasing the potential of “test-time scaling”. The model was developed using a stockpile of Nvidia A100 chips, which are now banned from export to China.

    DeepSeek’s emergence has led to a significant drop in the stock prices of major tech companies, including Nvidia and ASML. Nvidia suffered the largest one-day market value loss on record, shedding nearly $600 billion. This has led investors to question whether the market is overvaluing AI stocks, though some analysts believe this is an overreaction, noting the continued enormous demand for AI. DeepSeek’s ability to achieve high performance at low cost has raised questions about the massive investments being made by US tech companies in AI, and some analysts believe its efficiency could actually drive more AI adoption.

    OpenAI has accused DeepSeek of improperly using its models to train its own. There are reports that DeepSeek may have used a technique called “distillation” to achieve results similar to OpenAI’s models at a lower cost. DeepSeek has also experienced security lapses, including an exposed database containing over a million user chat logs, API keys, and internal infrastructure details. Additionally, the company’s privacy policy states that it stores user data, including chat histories, on servers in China. These security and privacy concerns have led the US Navy to ban its use.
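    “Distillation” in general means training a smaller “student” model to imitate a larger “teacher” by matching its output distribution rather than learning from raw data alone. A minimal sketch of the standard temperature-scaled KL-divergence loss at the heart of the technique (a generic illustration, not a claim about what DeepSeek actually did):

```python
import numpy as np

def softmax(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T    # temperature T softens the distribution
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs."""
    p = softmax(teacher_logits, T)             # teacher's "soft targets"
    q = softmax(student_logits, T)             # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [4.0, 1.0, 0.2]                      # confident teacher
student = [3.5, 1.2, 0.3]                      # student is close, so the loss is small
print(distillation_loss(teacher, student))
```

    The loss is zero only when the student reproduces the teacher’s distribution exactly, which is why access to a strong teacher model makes training a capable student far cheaper.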

    The rise of DeepSeek has highlighted the limitations of US sanctions on Chinese technology, with some experts suggesting that the sanctions may have unintentionally fueled domestic innovation in China. President Trump has called DeepSeek’s launch a “wake-up call” for US companies.

    DeepSeek’s R1 model is capable of answering questions and generating code, performing comparably to the top AI models. However, it has faced criticism for sometimes identifying as ChatGPT. The DeepSeek AI app is available on Apple’s App Store and online, and it is free. However, the company has had to pause new user registrations due to “large-scale malicious attacks”. Due to privacy concerns, some users are exploring alternative ways to access DeepSeek, such as through Perplexity AI or by using virtual machines. Perplexity AI offers DeepSeek on its web and iOS apps, although with usage limits.

    The DeepSeek story is still unfolding, with debates continuing about its security, ethical, and intellectual property implications. While some are skeptical of its longevity, especially in the US market, DeepSeek’s emergence has undoubtedly had a major impact on the tech landscape and has forced the AI sector to re-evaluate its strategies and investments.

    Works Consulted:

    “DeepSeek Exposes Database with Over 1 Million Chat Records.” BleepingComputer, 30 Jan. 2025, www.bleepingcomputer.com/news/secu…

    Wilson, Mark, et al. “DeepSeek Live – All the Latest News as OpenAI Reportedly Says New ChatGPT Rival Used Its Model.” TechRadar, 30 Jan. 2025, www.techradar.com/news/deep…

    Laidley, Colin. “What We Learned About the Future of AI from Microsoft, Meta Earnings.” Investopedia, 30 Jan. 2025, www.investopedia.com/what-we-l…

    Picchi, Aimee. “What Is DeepSeek, and Why Is It Causing Nvidia and Other Stocks to Slump?” CBS News, 28 Jan. 2025, www.cbsnews.com/news/deep…

    LLMs Will Augment Employment, Not End It

    LLMs, such as GPT-3.5 and GPT-4 developed by OpenAI, possess impressive language-processing capabilities. However, despite their remarkable abilities, LLMs are not poised to replace human workers. In this blog post, we will explore how LLMs will augment employment rather than supplant it.

    Contrary to the doomsday predictions of job losses due to automation, LLMs are not designed to replace human workers entirely. These machines excel at processing and generating human-like text, but they lack the cognitive abilities, creativity, and emotional intelligence that make human workers invaluable. LLMs are tools that enhance human productivity rather than replace it. They can assist employees by automating routine and time-consuming tasks, enabling humans to focus on complex decision-making, critical thinking, and creativity.

    While LLMs can generate vast amounts of information, fact-checking remains a critical aspect of responsible information dissemination. Although LLMs have been trained on vast datasets, they lack the discernment required to verify the accuracy of the information they generate. Human fact-checkers play a vital role in scrutinizing and verifying the content produced by LLMs, ensuring that only accurate and reliable information reaches the public. Their expertise and critical thinking skills cannot be replaced by machines, making human intervention indispensable in the fact-checking process.

    LLMs excel at automating mundane and repetitive tasks, freeing employees from time-consuming activities and allowing them to focus on higher-value work. For example, in content creation, LLMs can assist in generating first drafts, gathering research, or providing suggestions, saving valuable time for human writers who can then focus on refining, adding personal insights, and injecting creativity into their work. This symbiotic relationship between LLMs and human workers increases efficiency, productivity, and overall job satisfaction.

    It is essential to clarify that LLMs, including GPT-4, are not true AI. Despite their impressive capabilities, they lack true understanding, consciousness, and self-awareness. LLMs rely on pattern recognition and statistical processing rather than genuine cognitive reasoning. They do not possess subjective experiences or emotions. They are tools designed to process and generate text based on patterns learned from vast amounts of data. Therefore, LLMs cannot fully replicate the complexities of human intelligence, nor replace the multifaceted skills that humans bring to the workforce.

    The emergence of LLMs presents a promising future for the augmentation of employment rather than its replacement. LLMs will not replace human workers but will instead enhance their productivity and free them from mundane tasks. Fact-checkers remain indispensable in ensuring the accuracy and reliability of information generated by LLMs. It is crucial to remember that LLMs are not true AI; they lack the comprehensive cognitive abilities and emotional intelligence that make humans uniquely valuable in the workforce.

    As we move forward into an era where LLMs become increasingly integrated into our lives, it is crucial to embrace their potential while acknowledging their limitations. By working alongside LLMs, humans can utilize the benefits of automation, focus on higher-value work, and tap into their unparalleled ability to think critically, be creative, and empathize with others. The key lies in understanding that LLMs are tools that enhance human capabilities rather than replacements for the multifaceted skills and ingenuity that define us.


    Google's Tensor: The Data Company's Data Chip

    Since the A4 chip in 2010, Apple has designed its own chips for its own devices (with fabrication contracted out to partners). At the time, “owning the supply chain,” or vertical integration, was the way to control the full stack, from hardware down to software, across manufacturing and distribution. Since then, economies of scale have let Apple make more revenue on each phone sold.

    An unrealized benefit at the time was that designing your own chips also allows unique customization and experimentation with SoCs to differentiate devices from one another. Other examples include Samsung using its own chips overseas, Microsoft’s SQ-series chips in the Surface Pro X, its Windows-on-ARM offering, and the newest entrant: Google’s Tensor chip, which is the focus here. The important takeaway from these examples is the ease with which a manufacturer that owns both the hardware and software stack can, in theory, make components more efficient and intelligent across the entire device. Outside of mobile, Apple is bringing the same concept to its laptops with the M1 series.

    As the focus of this blog turns to all things data, the Tensor chip in the new Pixel 6 and Pixel 6 Pro is the most interesting and dynamic from an AI and ML perspective. This is not a site for phone reviews, so I won’t stray into one; instead, I’ll look at Tensor’s specifications and what its future holds for Google.

    Throughout the pandemic, Google was slow to release innovative entries in its Pixel line, likely due to chip shortages and the effort of creating an SoC from scratch. Google entered the market last month with Tensor, which is unlike anything else available, for better or worse. Since Google is not a semiconductor company and does not operate fabs, it has chosen Samsung to produce the final product.

    Functionality such as improved speech recognition, voice typing, live translate, and magic eraser to remove photobombers from pictures are all based on AI.

    ZDNet

    As Tensor-based devices offer these product differentiations, one often-forgotten benefit of an AI/ML blend is security on the SoC. The Titan M2 component on the die provides hardware-based security intended to stop attacks aimed at the device itself, e.g., brute-force entry attempts that try to bypass the fingerprint sensor.

    Google's first Tensor device will learn from your habits and make suggestions based on usage to save battery, utilize automatic speech recognition, and offer Magic Eraser to remove unwanted background intruders from your photos. Of all the Pixel's features over the years, the place where the Tensor SoC really shines is computational photography.

    Other cool camera features thanks to Tensor include Magic Eraser, a feature that erases unwanted objects or people from photos. This feature uses Google's ML to do the task of what somebody would need to do on Photoshop, but in an instant.

    Tom's Hardware

    Rather than relying on optics like a traditional camera, Tensor uses its AI components to fill in areas that are dull or even missing. In theory, the machine learning component of the SoC will allow features like Magic Eraser and Face Unblur to improve over time based on individual usage trends.

    Given that this is the first-generation Tensor SoC and Google is primarily a data company, this type of component plays to its core competency. Though Google is famous for deprecating or ending products like Google Wave and Google+, research-intensive projects such as SoC design and implementation are not something that can easily be written off as a "failed software product". Hardware costs far more to develop in any technology company's R&D department.

    The rumor mill is already circulating about the next-generation Pixel, presumably the Pixel 7, with a next-generation Tensor stack. That would make sense: Google can take all the data and usage it has collected from the first Tensor chips and use it to improve the AI and ML on future devices.

    Disclosure: I own the Pixel 6 and use it as my daily driver.

    A Primer on Apple's Q1 2021 Earnings

    I won't normally write about quarterly earnings reports; you can get that story from many other sources. For the purposes of this article, I want to discuss why Apple's most recent earnings report was remarkable. I am also not recommending buying or selling Apple stock one way or the other.

    Yesterday, Apple released an almost perfect Q1 2021 earnings report to Wall Street. Keep in mind this quarter ended on December 26, 2020, so these numbers do not reflect the post-holiday bump that most consumer electronics companies receive.

    Wall Street's estimate was $103 billion in revenue. The actual number was $111.4 billion, the most in the company's history and the first quarter with over $100 billion in revenue. Earnings per share (EPS) came in at $1.68, well above the $1.40 projection. For an entity as large as Apple, year-over-year growth of more than 20% is absolutely stellar. Apple's market cap, already past $2 trillion, is a modern-day phenomenon.
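    Those headline figures are easy to sanity-check. The sketch below uses the numbers above, plus Apple's prior-year Q1 FY2020 revenue of $91.8 billion, a figure from Apple's earlier report rather than this article:

```python
revenue_actual = 111.4      # $B, reported Q1 FY2021
revenue_estimate = 103.0    # $B, Wall Street consensus
revenue_prior_year = 91.8   # $B, Q1 FY2020 (from Apple's earlier report)
eps_actual, eps_estimate = 1.68, 1.40

beat_pct = (revenue_actual - revenue_estimate) / revenue_estimate * 100
yoy_pct = (revenue_actual - revenue_prior_year) / revenue_prior_year * 100
eps_beat_pct = (eps_actual - eps_estimate) / eps_estimate * 100

print(f"Revenue beat consensus by {beat_pct:.1f}%")   # ~8.2%
print(f"Year-over-year growth: {yoy_pct:.1f}%")       # ~21.4%
print(f"EPS beat: {eps_beat_pct:.1f}%")               # ~20.0%
```

    The year-over-year growth works out to roughly 21%, which is the "more than 20%" growth cited above.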

    “These results helped us generate record operating cash flow of $38.8 billion. We also returned over $30 billion to shareholders during the quarter as we maintain our target of reaching a net cash neutral position over time.”

    Source: Apple Investor Relations

    Apple's cash on hand for the quarter came to $195 billion. While still impressive, this figure was significantly higher before the company ramped up its dividend and share buyback program several years ago.

    In the quarter, a perfect storm of new iPhones, Macs, iPads, and services such as Apple TV+ and Apple Fitness+ created conditions for this record revenue.

    Apple's iPhone revenue continues to be the bulk of the company's total: the segment earned an estimated $65.6 billion versus estimates of $59.6 billion. The more streamlined iPhone 12 lineup, ranging from the 12 Mini to the 12 Pro Max, offered customers more options while leveraging Apple's sought-after supply chain to maximize margins.

    Looking at the Mac, the launch of Apple's new M1 chips in the MacBook Air and Mac Mini proved quite healthy, as Apple now competes with its former suppliers, creating a more vertical, controlled supply chain capable of full-stack solutions. Owning every component of the process, from hardware to software, creates a new experience.

    The M1 chip is based on Apple's A14 chip used in the iPhone and iPad; Apple is not new to the semiconductor/ARM game. This was an iterative move into the desktop, and it paid off as people continued working from home as a result of COVID. Revenue from the Mac division came in just under expectations at over $8.6 billion, yet it is still 21% higher than this time last year.

    As the Japanese and Chinese markets move past the worst of their current COVID situations, a bit of a rebound appears to be occurring. Notably, China alone is responsible for almost 20% of Apple's quarterly sales.

    Greater China sales surged 57% from the year-ago quarter to $21.31 billion, accounting for 19.1% of total sales. Japan sales soared 33.1% year over year to $8.29 billion, accounting for 7.4% of total sales.

    Source: NASDAQ

    Even what some analysts consider a downside exceeded expectations. Apple's services unit, which includes iTunes, the App Store, Apple Music, Apple TV+, and the new Fitness+ service, came in at $15.8 billion versus $14.9 billion expected. Over the past several years, Apple has attempted to diversify away from its heavy reliance on iPhone and iPad sales by building out its services business.

    Even as the iPhone and iPad continue to blow past expectations, Apple has some work to do to get its built-in user base to adopt more of its services and to invent or iterate on new ones. 2021 also surely holds plans for Apple to release new accessories to complement its margins, such as AirTags, new M1 products inside the iMac, updated AirPods, and the rumored AR/VR devices.