AI & Technology
Knowing Your Workflow for Note Taking
🔖 In my quest to become more knowledgeable in topics and subjects that are relevant to me or my career, I like to use tools such as Obsidian and Google Keep to jot down notes and reflections.
📗 In conjunction with my routine of ingesting insightful blogs and journalism, I’ll peruse Reddit (for example) to brainstorm blog ideas or retain useful facts for whatever project I’m considering. When following the appropriate subreddits, treat the comments as a way to weigh others’ opinions against your own. Lastly, I synthesize what I find for later use.
🧠 Personal Knowledge Management, or PKM, is only becoming more important in the age of GenAI (filtering LLM considerations from original sourcing). I have a lot of work to do in this area, but it takes practice to grasp a workflow that works for you and your needs.
Making Siri Great Again
If true, this Bloomberg report would be one of those rare instances that Apple would admit defeat – at least for the time being. Partnering with OpenAI or Anthropic for Siri may buy them some time. Then again, it could be analogous to Apple ceding the ad market, which is why they claim to be ‘privacy first’.
In recent weeks, reports have circulated that many firms, including Meta and Apple itself, have been interested in Anthropic. If Meta were successful, it would gain valuable real estate on the macOS and iOS platforms, likely ending Google’s multi-billion-dollar-a-year contract for search.
This would also confirm that Apple was unable to purchase the company outright, moving instead to the next best thing: a partnership that would vastly boost Anthropic’s already sky-high valuation.
In the past few weeks, OpenAI has not characterized its relationship with Microsoft as positive, so a decoupling from the software giant could be offset by new revenue from Apple. Nobody ever went broke from diversification, and that includes diversifying the customer base.
I Have LLMs Critique My Writings and Posts
Continuing the theme of LLMs enhancing your own work rather than replacing it, I had several platforms sift through and critique the writing in my posts and blogs. Some of the results were interesting, and unexpected to say the least. For this test I gave Gemini 2.5 and Claude Sonnet 4 the following prompt: “Critique the writings of michaelmartinez.co.”
This is yet another tool in my workflow. I could take or leave the suggestions, but as always, it’s an option. After reading through the “experiment”, I want to know what you, the reader, would like to see more of from me in future posts and writings.
Gemini 2.5 cited the following strengths:
Breadth of Knowledge: Martinez exhibits a wide array of interests, seamlessly transitioning between discussions on cybersecurity, the nuances of digital advertising antitrust cases, and historical analyses of xenophobia. This breadth enriches his perspectives and allows for interdisciplinary connections.
Topical Relevance: His chosen subjects are highly relevant to current global discourse, addressing pressing concerns in the tech industry (LLMs, AI, antitrust) and societal issues (xenophobia, foreign policy).
Clear and Concise Communication: The articles are structured well, with clear headings and a direct writing style, making complex topics accessible to a broad audience. He effectively summarizes key points and arguments.
Thought-Provoking Insights: Martinez often delves beyond surface-level observations, offering insightful critiques and raising pertinent questions, such as the effectiveness of breaking up large tech companies and the future impact of AI on advertising. His willingness to express concern about negative rhetoric and connect it to historical patterns of xenophobia adds a moral and ethical dimension to his foreign policy discussions.
Personal and Reflective Tone: Elements like his “Reading Updates” and experiences with Micro.blog add a personal touch, making his content more relatable and authentic. His emphasis on continuous learning and reading widely underscores a commitment to self-improvement and intellectual growth.
Use of External References: He frequently references external sources, such as TechCrunch’s “Cyber Glossary,” Jensen Huang’s views on AI, Mark Gurman’s criticisms of Apple, and Clayton Christensen’s “The Innovator’s Dilemma.” This demonstrates research and supports his arguments with credible insights.
The second part of the output was the “Areas for Potential Enhancement”.
Deeper Dive into Specific Arguments: While he touches on various aspects of each topic, providing more in-depth analysis or expanding on specific arguments with additional evidence or case studies could further strengthen his positions.
Original Research or Unique Perspectives: While he synthesizes information effectively, incorporating more original research, unique data points, or highly distinctive viewpoints could elevate his critiques to a more authoritative level.
Engagement with Counterarguments: Presenting and directly addressing potential counterarguments to his positions could lead to more robust and nuanced discussions.
Claude provided almost the same strengths and weaknesses as Gemini but went further in two ways. The first is a “Content Assessment”.
Technology Commentary: His AI/LLM analysis is generally sound but not groundbreaking. He correctly identifies key limitations but doesn’t offer particularly novel solutions or frameworks for thinking about these issues.
Political Commentary: His piece on American xenophobia attempts to draw historical parallels between treatment of Japanese Americans, Muslims post-9/11, Russians during the Ukraine conflict, and current Latino deportations. While the historical connections are valid, the analysis lacks nuance and depth that such complex topics deserve.
Educational Insights: His observations about AI in education are practical and well-reasoned, particularly his emphasis on teaching critical thinking alongside AI literacy.
Lastly, Claude provided an “Overall Assessment”.
Martinez’s blog represents competent but not exceptional commentary on technology and society. His strength lies in providing accessible, practical perspectives on complex topics, particularly around AI adoption and digital literacy. However, the writing lacks the depth of analysis or original insight that would distinguish it in a crowded field of tech commentary.
The blog serves its apparent purpose as a personal platform for sharing thoughts and maintaining an online presence, but it doesn’t rise to the level of must-read commentary on the topics it covers. For readers seeking practical, moderate perspectives on AI and technology adoption, it offers value. Those looking for cutting-edge analysis or original research would likely need to look elsewhere. The author’s commitment to continuous learning and reading is admirable and likely keeps his commentary reasonably well-informed, but translating that input into more distinctive analytical output would strengthen the overall impact of his writing.
I will keep retooling and refining my methods in the future based on this feedback. What do you, a human, think of the assessments from two of the biggest GenAI tools out there?
GenAI is Still Not Replacing You
Back in 2023, when LLMs and GenAI were still in their infancy, I argued that GenAI would be a tool for those in the job market and would change the way we work over the course of our careers. In the time since I wrote that piece, not much has changed, and I still stand behind that rationale.
It is essential to clarify that LLMs, including GPT-4, are not true AI. Despite their impressive capabilities, they lack true understanding, consciousness, and self-awareness. LLMs rely on pattern recognition and statistical processing rather than genuine cognitive reasoning. They do not possess subjective experiences or emotions. They are tools designed to process and generate text based on patterns learned from vast amounts of data. Therefore, LLMs cannot fully replicate the complexities of human intelligence, nor replace the multifaceted skills that humans bring to the workforce.
The core of my argument is the lack of reasoning and thinking. To this day, I do not “like” those terms and believe we should choose better words to describe what is, underneath it all, token prediction.
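To make the “prediction, not reasoning” point concrete, here is a toy sketch of my own (nothing like a real LLM’s scale or architecture, and the corpus is invented): a bigram model that picks the next token purely by counting which token most often followed the current one in its training text. There is no understanding anywhere, only frequency statistics.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which token follows which: pure pattern statistics, no understanding."""
    tokens = text.split()
    model = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        model[a][b] += 1
    return model

def predict_next(model, token):
    """Return the most frequent follower, i.e. the statistically likely next token."""
    followers = model.get(token)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat the cat ran on the floor"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" (it followed "the" most often)
```

Real LLMs condition on long contexts with learned weights rather than raw counts, but the underlying operation, choosing a likely continuation from observed patterns, is the same in kind.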
Recently, The Economist published a piece entitled “Why AI hasn’t taken your job”. It argues that AI has changed the nature of some careers, such as translation (see Duolingo) and language learning, but notes that demand for upskilled roles, such as interpretation, has increased. Upskilling and upshifting of productivity remain key to future success in sectors like these.
Klarna is also given as an example where a choice is given between GenAI based customer service or a human:
“There will always be a human if you want,” Sebastian Siemiatkowski, its boss, has recently said.
More importantly (nothing is definite with new technologies), despite the massive technology layoffs and CEOs’ claims that AI is replacing workers, the data tells a different story.
Across the board, American unemployment remains low, at 4.2%. Wage growth is still reasonably strong, which is difficult to square with the notion that AI is causing demand for labour to fall. Trends outside America point in a similar direction. Earnings growth in much of the rich world, including Britain, the euro area and Japan, is strong. In 2024 the employment rate of the OECD club of rich countries, describing the share of working-age people who are actually in a job, hit an all-time high.
The conclusion is the same: GenAI replaces redundancy, not people. That may change as the technology matures, if unforeseen breakthroughs surface, but for now it’s still best to teach yourself how to use these tools so that you aren’t replaced by an employee who is already familiar with them.
AI and Learning: There's a Major Disconnect Occurring in Education
Just as in every other industry and application, the use of LLMs is rewriting best practices, and education is currently the largest social sector being disrupted. Rather than letting students use LLMs to ‘cheat’, educators and public administrators must teach students how to use and coexist with them rather than relying on them alone.
Many students have abandoned critical thinking altogether and resorted to outsourcing their knowledge to LLMs just to produce answers and save time, without learning or even reasoning through what they are prompting or what the output is telling them. As a result, teachers, administrators, and parents are beyond frustrated and are returning to the good old blue books, as cited in this Gizmodo article.
Students need to learn how to research properly while practicing sound time management. I won’t say laziness is at work among all parties involved; the quickest way to get from A to B is often to use the tools at your disposal, and this is true in education and, eventually, the workplace. However, a lot is lost in the process, including how to think critically about the output LLMs display and what it might mean for the overall context of the subject.
Education has always been a lagging indicator of technological trends, and this is no different. LLMs and other types of GenAI are tools, not the end-all-be-all solution in the classroom. Using them as research partners while taking a critical view of what they tell you is paramount to making research easier for all, without compromising the time-honored tradition of writing research papers or a child’s knowledge retention.
A full education, as always, should concentrate on a child’s soft skills: learning how to think critically, put together proper research, write for life, and hone the communication skills that make for successful careers and lives. Tools can support that; pouring prompts into an unchecked LLM cannot.
Google's Gemini will Contain Ads -- It's Only a Matter of Time
It’s common knowledge that Google is an advertising company first and foremost. All of its products weave together into an ad network that many governments and organizations consider anti-competitive. Run a standard Google search and you start with sponsored posts, followed by more advertising, which can be hard to navigate when you need accurate and actionable information fast.
Gemini, Google’s AI and LLM platform, is no exception. It’s a tool for consumers and businesses alike, which makes it a marketer’s dream. Recently, it was announced that marketers can use the service to insert more strategic ad placements into YouTube. Agentics, that is, GenAI that acts on your behalf, makes decisions, automates decision-making within the user’s parameters, and adapts, is now becoming commonplace among browsers, LLMs, and other AI tools and infrastructure. Marketing and advertising firms will have to adapt to serve these requests with minimal human intervention. The trick is serving ads through these means.
As I noted in the first paragraph, Google is an advertising company, so this is just one more avenue through which it must figure out how to use its closed ad network to bring in more revenue. It must walk a tightrope, though, given the antitrust concerns currently before the US federal government and EU agencies. While the early 2000s were dominated by SEO, GEO (Generative Engine Optimization) is built from the start to work properly on AI platforms.
Anthropic recently announced that Reed Hastings, founder of Netflix, has joined its Board of Directors. This may seem strange on the surface but makes perfect sense once we recall that Netflix, the world’s top streamer, has a robust advertising platform. Hastings’s expertise in this arena fits perfectly with what Anthropic wants to pursue: becoming an advertising platform to rival Google’s. That is no surprise given that Anthropic was reportedly among the parties interested in purchasing Chrome should the courts rule that Google has to divest it.
Google’s strategy has always been that of the tech industry as a whole: gain as many users as possible with a new product, then turn on the spigot of advertising. We can only hope that the sheer number of ads inserted won’t damage the Gemini product as it has Google Search. We might not need a government to dismantle the firm; the vast competition within the LLM space, with all of tech wanting a piece of the pie, may force Google to retool and strike a balance between users and AdSense. Wielding its vast ad network across other products might then become a coherent strategy, much as Microsoft focused on cloud rather than Windows throughout the 2010s.
Article Recommendation: "Want to Use AI as a Career Coach? Use These Prompts."
🏫 Harvard Business Review published an excellent article on using LLMs (GenAI) as a career coach.
🧠 As usual, be careful and watch out for hallucinations and be certain to fact check any information and utilize citations (if given).
🤝 In the end, do your own homework and remember, it’s a companion, not a monolith.
🎒 The added benefit is that you will hone your own unique skills and reflect all the while, creating the basis for your use of LLMs, which will put you ahead of those who do not yet use this technology.
✏️ It’s not going away, so you might as well make it work for you, in more ways than one!
This was originally a post on my personal LinkedIn page.
The Real Danger of Misinterpreting AI
Believing AI is sentient leads to false expectations—some may trust AI’s recommendations without skepticism, assuming they are the result of independent reasoning rather than probabilistic predictions. In policy discussions, regulators struggle with defining AI responsibility, misplacing ethical accountability onto models instead of the humans who deploy them. The danger is not AI itself but how we perceive and integrate it into decision-making, security, and governance.
‼️ Don’t Be Like Everyone Else:
🧠 Understand AI’s Limitations – It predicts and mimics, but it does not think. Recognizing this distinction helps avoid misinterpretation.
📖 Stay Informed on AI Ethics and Policy – Governments are trying to regulate AI’s role, but misconceptions could shape flawed laws.
🔨 Use AI as a Tool, Not a Decision-Maker – It can enhance productivity, but critical thinking must always come first.
The future belongs to those who understand AI for what it is—not what sci-fi wants it to be. Whether you’re a student, a professional, or a policymaker, mastering AI’s true capabilities is no longer optional—it’s a necessity.
This was originally posted on LinkedIn on May 30, 2025.
Navigating the Future: Why Mastering LLMs is Essential for Today's Students & Workers
After reading this piece on CNBC’s website about how students should be using AI (I still prefer the term LLMs, since nobody can agree on what AI is and isn’t), I’ve been considering what today’s youth are learning with respect to the employment skills that will be preferred going forward.
First, consider the source: Jensen Huang’s goal is to sell as much GPU compute and resources as humanly possible, so we must take his advice with a grain of salt. Second, LLM tools are a must-have in today’s classrooms and workplaces. If you aren’t utilizing prompting to the fullest extent, you are already falling behind. Take the time to find an online course through edX or Coursera, for example, to hone these skills.
Huang stated:
“Learning how to interact with AI is not unlike being someone who’s really good at asking questions,” he added. “Prompting AI is very similar. You can’t just randomly ask a bunch of questions. Asking AI to be an assistant to you requires some expertise and artistry of how to prompt it.”
Like all tools, they must not only be learned but used practically. Using a tool for its own sake creates more problems, and diminishing returns will follow.
We must also question whether learning how to code remains vital in today’s computer science programs. I still argue yes! If we blindly follow output from a chatbot, we lack the ability to understand what the code means, whether it works, or how well it answers the original prompt. “Is this what the client actually asked for?” and “How can we implement this?” are two major questions that will never go away. Just as Wikipedia was, and is, a starting point for research and learning, chatbots and LLM products should serve the same role. As always, check the source material: as with Wikipedia, bias and human intervention still shape the research and results that an LLM product provides.
The part that Huang really gets correct is as follows:
Perfecting AI prompts — and asking better questions in general — is a skill that will remain relevant for years to come, so students should take the time to develop it, no matter what career field they see themselves in.
Much of academia is not yet versed in large language models, and prompting is still new to those set in their ways. For aiding research, it is invaluable to have a companion in the room that makes learning easier and the consumption of knowledge more streamlined.
We have a long way to go as a society, caught between those who are skeptical of LLMs at all costs and those who talk to them as if they were human beings with real thoughts and valid feelings. The battle between skeptics and accelerationists is not one we should be having; a moderate position that accepts LLMs as tools we must all learn will serve us in any career or academic endeavor, no matter what areas of study we choose to pursue.
Google & Meta Share a Common Antitrust Thread: Advertising
Google and Meta have come under vast amounts of antitrust scrutiny over the past few years, especially in the US and the EU. It’s important to consider that both firms have more in common than one might think – they are advertising companies.
Regulators have been targeting both companies, alleging that Google abuses monopolistic power in search and that Meta streamlines its properties (i.e., Instagram, WhatsApp, Facebook) to stifle competition. What’s often missing is the thread that ties their properties together: their respective ad products.
The Verge reported that, “Judge Brinkema found Google “liable under Sections 1 and 2 of the Sherman Act” due to its practices in the ad tech tool and exchange spaces but dismissed the argument that Google had operated a monopoly in ad networks.” In a separate ruling last year, the court found Google’s search business to be a monopoly, with proposed remedies including breaking off properties such as Chrome. By themselves, Chrome, YouTube, Android, Gemini, Search, and more are not the issue; rather, the ad platforms moving through them create a monopoly in ad-market revenue, since Google owns both the buy and sell sides of its advertising exchange.
Separating out Chrome by itself won’t accomplish much if there are no restraints on Google simply forking Chromium, which Chrome is based upon, and creating a new browser. Regardless, Google would still retain the monopoly stemming from its acquisition of DoubleClick back in 2008. If the DOJ wants to remedy the monopoly, it should look at Google’s control of both the buy and sell sides of the digital advertising economy. In recent years, Google has attempted to diversify its revenue away from advertising into other businesses, such as its cloud offerings and the $32 billion acquisition of Wiz, a large cloud-security firm.
Meta
Now we move on to Meta. Just today, the company announced that its Threads platform will begin offering advertising in a limited capacity, likely to expand just as it has on all other Meta properties. Like Google, Meta is not immune to litigation. This morning, the EU fined both Apple and Meta, the latest in a long line of penalties for violating the Digital Markets Act (DMA). The European Commission’s issue is one of privacy and the “pay or consent” model (using your data without permission or replacing the ad business with a paid version of these services). Like Google, Meta has invested heavily in AI and the metaverse as likely diversification plays.
Remedies
In many of these cases brought by the European Commission, the US DOJ, and state AGs, the suggested remedy is to break up Alphabet (Google) and Meta into their individual parts. If that were to happen, the ad businesses must go along with them: the more properties Google and Meta own and create, the deeper the ad network and revenue. OpenAI has already suggested that it would purchase Chrome from Google should it come to that. But that just creates another problem: another massive tech conglomerate. A buyer like OpenAI, giving Google’s traditional search a run for its money, would simply replace the juggernaut with itself. The Sherman Antitrust Act of 1890 has been cited as precedent for the potential breakups; however, this is dated legislation passed to deal with the Standard Oils and AT&Ts of their eras, not modern big tech.
What’s Next?
Both companies have stated that they will appeal these rulings, and any final decisions will likely take years. At the fast pace technology moves, we must ask whether these two firms will still look and act the same in a two-to-five-year timeframe. Google’s Gemini and Meta’s Llama LLMs and AI R&D will change rapidly in just the next three months if recent AI growth trajectories are any guide.
As before, Google and Meta will be pressured to monetize AI in a changing landscape where traditional search is upended by agentics. What would current ad networks look like across LLMs? That is a discussion for later down the road, once methodologies are tested.
Experiences Using Micro.blog so Far in 2025
It’s been a few months now since I started using Micro.blog as my main website and posting service, so I thought I would take some time to reflect upon it.
To start, I was looking for a replacement for WordPress given all the drama over at Automattic, its owner, but it wasn’t only that. I was writing less and just paying for hosting, so it was no longer fit for my needs. I was able to seamlessly transfer my existing posts to Micro.blog and save some money in the process. I signed up for the Micro.one service, which costs only $10 a year, not counting my domain, which I pay for through Hover. That alone saved a little money and more accurately reflects how infrequently I was posting.
I like how easy the UI makes it to just type and post. I can make a post as short as a quick 300-character note or as long as my articles require. I will eventually learn Markdown so I can really start organizing my posts and keeping them uniform, but that will come later.
There’s one last feature I appreciate and that’s called, “Bookshelves”. It’s analogous to Goodreads and allows you to keep track of what you’re reading, want to read, and have completed. What I like is it gives you the option to make a post around it as sort of posting your thoughts around the book or making it easier to write a book review if you wish. While I only post which books I finish and use it to keep track, it’s had one more benefit – I’ve read the most this year, so far, that I have at any time since college.
When I made my goals at the beginning of the year, reading more was not on my “to-do” list, but it happened, and I’m grateful to the feature for inspiring it. I set a goal of 12 books, which I’ve already blown past. I will likely write a post later this year about what has stood out to me in what I’ve read so far.
To make a long story short, I’m loving this service and its features. I’m able to utilize the easy UI/UX so I just write and post. As a result, I find myself posting a bit more than I did over at WordPress. It might not be as full featured, but if you just want to start writing and cross posting to other services, Micro.blog gives you everything you need – well, at least what my needs have become.
A Quick Conversation about Apple's AI & Siri Problem
In the technology sector, if you aren’t innovating, you’re falling behind. This is no different for Apple, which is not used to being behind on features. Generally, Apple waits until it has perfected a technology before introducing it to the public. Recently, that hasn’t been the case, as we’ve seen with the cutbacks to the Apple Vision Pro and, this past week, AI features.
Famous Apple watcher John Gruber published a scathing blog post on Daring Fireball about Apple falling behind with respect to LLMs and AI as a whole, entitled “Something Is Rotten in the State of Cupertino”.
In his post, Gruber discusses how the promises Apple announced have hurt the company’s credibility with customers. Siri has long been flawed, with little innovation in recent years, and with Google, OpenAI, and others surging ahead, Apple is left with what he calls “vaporware”.
In previous iOS updates, Apple has had to deprecate and continually delay features because of bugs, AI hallucinations, and parlor tricks that don’t differentiate it from, say, Google’s Gemini or Anthropic’s Claude. Voice assistants are as complex and innovative as ever, and now we’re witnessing what agentic browsers can accomplish.
In a recent blog post, venture capitalist, Om Malik, a legend in his own right, postulates that, “Apple has its own golden handcuffs. It’s a company weighted down by its own market capitalization and what stock market expects from it.”
This reminds me of Clayton Christensen’s best-selling book, “The Innovator’s Dilemma”. The theory holds that dominant incumbents fail to adapt to newer disruptive technologies (AI and LLMs in this case), cling to their existing strengths, and ultimately fail. We see such a case with Intel, which missed the mobile generation and now stands at a crossroads between failure and being broken up and sold for parts.
As we know, Apple has broken the innovator’s dilemma before, deprecating its successful iPod for the iPhone, eventually releasing the iPad, and creating an ecosystem around them that made the company more successful with each pivot. It’s too soon to tell whether Apple has peaked, but the major setbacks with Siri and Apple Intelligence are alarming to shareholders and Apple stakeholders alike. It’s certainly a development to watch.
Interview with Stephen Wolfram on AI and Machine Learning
I don’t normally share clips from podcasts I listen to, but in this case, it’s well worth it. On Intelligent Machines, Episode 808, the creator of Mathematica and founder of Wolfram Alpha, Stephen Wolfram, shares his views on how he sees AI progressing from here.
He brilliantly discusses how AI will augment humans rather than completely replace them in the workforce (something I’ve been advocating for a while), why AGI is not what we think it is today, how Wolfram Alpha approaches machine learning differently than most LLMs currently do, and the emergence of an “AI civilization” that will operate independently of human authority.
The interview is around 40 minutes, but it is well worth your attention.
DeepSeek's Surprise Entrance into the AI Arena
DeepSeek, a Chinese AI startup, has rapidly become a major disruptor in the AI landscape with its new AI model, R1. This model has gained global attention for its ability to compete with models like OpenAI’s ChatGPT, but at a significantly lower cost. The emergence of DeepSeek has caused ripples across the tech industry, impacting stock markets and sparking debates about data privacy and the future of AI development.
DeepSeek was founded in mid-2023 by Liang Wenfeng, a Chinese hedge fund manager. The company’s AI model, DeepSeek R1, was released on January 20, 2025, and quickly gained popularity. DeepSeek is an open-source large language model that uses a method called “inference-time computing,” which activates only the most relevant parts of the model for each query, saving money and computational power.
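To illustrate the idea of activating only part of a model per query: publicly described designs like DeepSeek’s use a mixture-of-experts architecture, where a small router selects a few “expert” sub-networks per token and skips the rest. The sketch below is my own toy illustration of top-k routing, not DeepSeek’s actual code; all names, sizes, and the linear “experts” are invented for demonstration.

```python
import numpy as np

def top_k_routing(gate_logits, k=2):
    """Pick the k highest-scoring experts and softmax-normalize their weights."""
    top = np.argsort(gate_logits)[-k:]               # indices of the k best experts
    w = np.exp(gate_logits[top] - gate_logits[top].max())
    return top, w / w.sum()

def sparse_forward(x, experts, gate, k=2):
    """Run only the selected experts; the others cost nothing for this token."""
    idx, weights = top_k_routing(gate @ x, k)
    return sum(w * experts[i](x) for i, w in zip(idx, weights))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
# Each "expert" is a tiny linear map; only 2 of 16 ever run per token.
mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, M=M: M @ x for M in mats]
gate = rng.normal(size=(n_experts, d))               # router parameters

out = sparse_forward(rng.normal(size=d), experts, gate, k=2)
print(out.shape)  # (8,)
```

The compute saving is the point: with 2 of 16 experts active, each token pays roughly an eighth of the dense cost while the model keeps the full parameter count.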
This efficiency has enabled DeepSeek to achieve comparable results to other AI models at a much lower cost. The company reportedly only spent $6 million to develop its model, compared to the hundreds of billions being invested by major US tech companies. Nvidia has described DeepSeek’s technology as an “excellent AI advancement,” showcasing the potential of “test-time scaling”. It was developed using a stockpile of Nvidia A100 chips, which are now banned from export to China.
DeepSeek’s emergence has led to a significant drop in the stock prices of major tech companies, including Nvidia and ASML. Nvidia suffered its largest ever one-day market value loss, shedding $600 billion. This has led investors to question whether the market is overvaluing AI stocks. However, some analysts believe this is an overreaction, noting the continued enormous demand for AI. DeepSeek’s ability to achieve high performance at low costs has raised questions about the massive investments being made by U.S. tech companies in AI. Some analysts believe DeepSeek’s efficiency could drive more AI adoption.
OpenAI has accused DeepSeek of using its models illegally to train its own model. There are reports that DeepSeek may have used a technique called “distillation,” to achieve similar results to OpenAI’s model at a lower cost. DeepSeek has also experienced security breaches, exposing over a million user chat logs, API keys, and internal infrastructure details. Additionally, the company’s privacy policy states that it stores user data, including chat histories, on servers in China. These security and privacy concerns have led to the US Navy banning its use.
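Distillation, broadly, means training a smaller “student” model to match a larger “teacher” model’s full output distribution rather than just its final answers. A minimal sketch of the classic soft-target loss follows; this is my own illustration of the general technique, not how any party here actually trained, and the temperature and toy logits are arbitrary.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened probabilities; higher T reveals more of the distribution."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this pushes the student to mimic the teacher's relative
    confidences across all answers, not just its single top answer.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))

teacher = [4.0, 1.0, 0.5]   # confident teacher output
close   = [3.8, 1.1, 0.4]   # student that mimics the teacher
far     = [0.2, 3.0, 1.0]   # student that disagrees

print(distillation_loss(close, teacher) < distillation_loss(far, teacher))  # True
```

The appeal is cost: the student learns from the teacher’s rich signal instead of raw data alone, which is why matching a frontier model’s behavior cheaply is plausible, and why providers treat it as a terms-of-service issue.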
The rise of DeepSeek has highlighted the limitations of US sanctions on Chinese technology, with some experts suggesting that the sanctions may have unintentionally fueled domestic innovation in China. President Trump has called DeepSeek’s launch a “wake-up call” for US companies.
DeepSeek’s R1 model is capable of answering questions and generating code, performing comparably to the top AI models, though it has faced criticism for sometimes identifying itself as ChatGPT. The DeepSeek app is free and available on Apple’s App Store and online, but the company has had to pause new user registrations due to “large-scale malicious attacks.” Due to privacy concerns, some users are exploring alternative ways to access DeepSeek, such as through Perplexity AI or by using virtual machines. Perplexity offers DeepSeek on its web and iOS apps, although with usage limits.
The DeepSeek story is still unfolding, with debates continuing about its security, ethical, and intellectual property implications. While some are skeptical of its longevity, especially in the US market, DeepSeek’s emergence has undoubtedly had a major impact on the tech landscape and has forced the AI sector to re-evaluate its strategies and investments.
Works Consulted:
“DeepSeek Exposes Database with Over 1 Million Chat Records.” BleepingComputer, 30 Jan. 2025, www.bleepingcomputer.com/news/secu…
Wilson, Mark, et al. “DeepSeek Live – All the Latest News as OpenAI Reportedly Says New ChatGPT Rival Used Its Model.” TechRadar, 30 Jan. 2025, www.techradar.com/news/deep…
Laidley, Colin. “What We Learned About the Future of AI from Microsoft, Meta Earnings.” Investopedia, 30 Jan. 2025, www.investopedia.com/what-we-l…
Picchi, Aimee. “What Is DeepSeek, and Why Is It Causing Nvidia and Other Stocks to Slump?” CBS News, 28 Jan. 2025, www.cbsnews.com/news/deep…
LLMs Will Augment Employment, Not End It
LLMs, such as OpenAI’s GPT-3.5 and GPT-4, possess impressive language-processing capabilities. However, despite their remarkable abilities, LLMs are not poised to replace human workers. In this blog post, we will explore how LLMs will augment employment rather than supplant it, providing evidence to support this claim.
Contrary to the doomsday predictions of job losses due to automation, LLMs are not designed to replace human workers entirely. These machines excel at processing and generating human-like text, but they lack the cognitive abilities, creativity, and emotional intelligence that make human workers invaluable. LLMs are tools that enhance human productivity rather than replace it. They can assist employees by automating routine and time-consuming tasks, enabling humans to focus on complex decision-making, critical thinking, and creativity.
While LLMs can generate vast amounts of information, fact-checking remains a critical aspect of responsible information dissemination. Although LLMs have been trained on vast datasets, they lack the discernment required to verify the accuracy of the information they generate. Human fact-checkers play a vital role in scrutinizing and verifying the content produced by LLMs, ensuring that only accurate and reliable information reaches the public. Their expertise and critical thinking skills cannot be replaced by machines, making human intervention indispensable in the fact-checking process.
LLMs excel at automating mundane and repetitive tasks, freeing employees from time-consuming activities and allowing them to focus on higher-value work. For example, in content creation, LLMs can assist in generating first drafts, gathering research, or providing suggestions, saving valuable time for human writers who can then focus on refining, adding personal insights, and injecting creativity into their work. This symbiotic relationship between LLMs and human workers increases efficiency, productivity, and overall job satisfaction.
It is essential to clarify that LLMs, including GPT-4, are not artificial general intelligence. Despite their impressive capabilities, they lack genuine understanding, consciousness, and self-awareness. LLMs rely on pattern recognition and statistical processing rather than cognitive reasoning. They do not possess subjective experiences or emotions. They are tools designed to process and generate text based on patterns learned from vast amounts of data. Therefore, LLMs cannot fully replicate the complexities of human intelligence, nor replace the multifaceted skills that humans bring to the workforce.
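The point about statistical processing can be made concrete with a toy bigram model, a vastly simplified stand-in for an LLM's next-token prediction (real models learn far richer patterns, but the principle is the same):

```python
from collections import Counter, defaultdict

# At its core, a language model predicts the next token from statistics
# of its training text. This tiny bigram counter illustrates that idea.

corpus = "the cat sat on the mat the cat ate".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- chosen by frequency, not by understanding
```

The prediction is purely frequency-driven: "cat" follows "the" most often in the corpus, so "cat" is emitted, with no notion of what a cat is.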
The emergence of LLMs presents a promising future for the augmentation of employment rather than its replacement. LLMs will not replace human workers but will instead enhance their productivity and free them from mundane tasks. Fact-checkers remain indispensable in ensuring the accuracy and reliability of information generated by LLMs. It is crucial to remember that LLMs are not general intelligence; they lack the comprehensive cognitive abilities and emotional intelligence that make humans uniquely valuable in the workforce.
As we move forward into an era where LLMs become increasingly integrated into our lives, it is crucial to embrace their potential while acknowledging their limitations. By working alongside LLMs, humans can utilize the benefits of automation, focus on higher-value work, and tap into their unparalleled ability to think critically, be creative, and empathize with others. The key lies in understanding that LLMs are tools that enhance human capabilities rather than replacements for the multifaceted skills and ingenuity that define us.
Google's Tensor: The Data Company's Data Chip
Since the A4 chip in 2010, Apple has designed its own silicon for its own devices (fabrication is contracted out to partners such as TSMC). At the time, "owning the supply chain," or the vertical, was the way of controlling the full stack, from hardware down to software, across manufacturing and distribution. Since then, economies of scale have allowed Apple to earn more revenue on each phone sold.
An unrealized benefit at the time was that designing your own chips also allows unique customization and experimentation with SoCs, letting device makers differentiate themselves from one another. Other examples include Samsung using its own Exynos chips in overseas markets, Microsoft's SQ-series chips in the Surface Pro X (its Windows-on-ARM offering), and the newest entrant, Google's Tensor chip, which is the focus here. The important takeaway from these examples is that when a manufacturer owns both the hardware and software stack, components can, in theory, become more efficient and intelligent across the entire device. Outside of mobile, Apple has brought the same concept to its laptops with the M1 series.
As the focus of this blog is all things data, the Tensor chip is the most interesting and dynamic of these from an AI and ML perspective, debuting in the new Pixel 6 and Pixel 6 Pro. This is not a phone-review site, so I will not stray into a review, but rather look at Tensor's specifications and what its future holds for Google.
Throughout the pandemic, Google was slow to innovate with its Pixel line, likely due to chip shortages and the challenge of creating an SoC from scratch. Google entered the market last month with Tensor, which is unlike anything else available, for better or worse. Since Google is not a semiconductor company and does not operate its own fabs, it has chosen Samsung to produce the final product.
Functionality such as improved speech recognition, voice typing, live translate, and magic eraser to remove photobombers from pictures are all based on AI.
ZDNet
As Tensor-based devices offer these differentiated uses, one often-forgotten benefit of an AI/ML blend is security on the SoC. The Titan M2 component on the die allows for hardware-based security that will ideally stop attacks aimed at the device itself, e.g., brute-force entry attempts that try to bypass the fingerprint sensor.
Google's first Tensor device will learn from your habits and make suggestions based on usage to save battery, power automatic speech recognition, and run Magic Eraser to get rid of those unwanted background intruders in your photos. Of all the Pixel's features over the years, the place where the Tensor SoC really shines is computational photography.
Other cool camera features thanks to Tensor include Magic Eraser, a feature that erases unwanted objects or people from photos. This feature uses Google's ML to do the task of what somebody would need to do on Photoshop, but in an instant.
Tom's Hardware
Rather than relying on optics alone like a traditional camera, Tensor uses its AI components to fill in areas that are dull or even missing. In theory, the machine-learning component of the SoC will allow features like Magic Eraser and Face Unblur to improve over time based on individual usage trends.
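Conceptually, the fill-in step works like inpainting: erased pixels are reconstructed from their surroundings. The neighbor-averaging toy below is only an illustration of that core idea (Google's actual ML models are far more sophisticated):

```python
# Deliberately simple stand-in for Magic Eraser-style inpainting:
# pixels marked "erased" are filled in from unmasked neighbors.

def fill_masked(row, mask):
    """Replace masked pixels with the average of unmasked neighbors."""
    filled = list(row)
    for i, masked in enumerate(mask):
        if masked:
            neighbors = [row[j] for j in (i - 1, i + 1)
                         if 0 <= j < len(row) and not mask[j]]
            filled[i] = sum(neighbors) // len(neighbors) if neighbors else 0
    return filled

row  = [100, 102, 255, 104, 106]   # one scanline; 255 is an "intruder" pixel
mask = [False, False, True, False, False]
print(fill_masked(row, mask))  # [100, 102, 103, 104, 106]
```

A learned model replaces the crude averaging with predictions of what *should* be there, which is why the results can look like nothing was ever removed.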
Given that this is the first-generation Tensor SoC and Google is primarily a data company, this type of component plays to its core competency. Though Google is famous for deprecating or ending products like Google Wave and Google+, research-intensive projects such as SoC design and implementation cannot easily be written off as a "failed software product." Hardware costs far more to develop in any technology company's R&D department.
The rumor mill is already churning about the next-generation Pixel, presumably the Pixel 7, with a next-generation Tensor stack. That would make sense: Google can take all of the data and usage it has collected from the first Tensor chips and use it to improve the AI and ML on future devices.
Disclosure: I own the Pixel 6 and use it as my daily driver.
A Primer on Apple's Q1 2021 Earnings
I won't normally write about quarterly earnings reports; you can get that story from many other sources. For the purposes of this article, I want to discuss why Apple's most recent earnings report was remarkable. I am also not recommending buying or selling Apple stock one way or the other.
Yesterday, Apple released an almost perfect Q1 2021 earnings report to Wall Street. Keep in mind this report ended on December 26, 2020, so these numbers do not reflect the post-holiday bump that most consumer electronics companies receive.
Analyst estimates called for $103 billion in revenue. The actual number was $111.4 billion, the most in the company's history and the first quarter with over $100 billion in revenue. Earnings per share (EPS) came in at $1.68, well above the $1.40 projection. For an entity as large as Apple, year-over-year growth of over 20% is absolutely stellar. Apple's market cap, now north of $2 trillion, is a modern-day phenomenon.
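The headline beats are easy to sanity-check. Note that the prior-year comparison figure of $91.8 billion (Apple's Q1 2020 revenue) is pulled in from Apple's earlier report, not from the numbers above:

```python
# Sanity-checking the earnings beats described above ($ in billions).
revenue, revenue_est = 111.4, 103.0
eps, eps_est = 1.68, 1.40
prior_year_revenue = 91.8  # Apple's Q1 FY2020 revenue, for the YoY comparison

revenue_beat = (revenue / revenue_est - 1) * 100
eps_beat = (eps / eps_est - 1) * 100
yoy_growth = (revenue / prior_year_revenue - 1) * 100

print(f"revenue beat: {revenue_beat:.1f}%")  # ~8.2% above estimates
print(f"EPS beat: {eps_beat:.1f}%")          # ~20.0% above estimates
print(f"YoY growth: {yoy_growth:.1f}%")      # ~21.4% year over year
```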
“These results helped us generate record operating cash flow of $38.8 billion. We also returned over $30 billion to shareholders during the quarter as we maintain our target of reaching a net cash neutral position over time.”
Source: Apple Investor Relations
Apple's total cash on hand for the quarter was $195 billion. While still impressive, this figure was significantly higher before the company ramped up its dividend and share-buyback program several years ago.
In the quarter, a perfect storm of new iPhones, Macs, iPads, and services such as Apple TV+ and Apple Fitness+ created conditions for this record revenue.
Apple's iPhone revenue continues to be the bulk of the company's total: $65.6 billion in the segment versus estimates of $59.6 billion. Apple's more streamlined lineup, with iPhone 12 models ranging from the 12 mini to the 12 Pro Max, offered customers more options while leveraging Apple's sought-after supply chain to maximize margins.
Looking at the Mac, sales following the launch of Apple's new M1 chips in the MacBook Air and Mac mini were quite healthy; again, Apple now competes with its former suppliers, creating a more vertical, controlled supply chain capable of full-stack solutions. Owning every component of the process, hardware and software alike, creates a new experience.
The M1 chip is based on Apple's A14 chips used in the iPhone and iPad; Apple is not new to the semiconductor/ARM game. This was an iterative move into the desktop, and it paid off as people continued to work from home through COVID. Revenue from the Mac division came in just under expectations at over $8.6 billion, yet still 21% higher than a year earlier.
As the Japanese and Chinese markets move past the worst of their COVID situations, a bit of a rebound appears to be occurring. Notably, Greater China alone accounts for almost 20% of Apple's quarterly sales.
Greater China sales surged 57% from the year-ago quarter to $21.31 billion, accounting for 19.1% of total sales. Japan sales soared 33.1% year over year to $8.29 billion, accounting for 7.4% of total sales.
Source: NASDAQ
Even what some analysts consider the downside exceeded expectations. Apple's services unit, which includes iTunes, the App Store, Apple Music, Apple TV+, and the new Fitness+ service, came in at $15.8 billion versus $14.9 billion expected. Over the past several years, Apple has attempted to diversify away from its heavy reliance on iPhone and iPad sales, and thus has been building out its services business.
As the iPhone and iPad continue to blow past expectations, Apple has some work to do to get its built-in user base to use more of its services, and to invent or iterate on new services moving forward. 2021 also surely holds plans for new accessories to complement its margins, such as AirTags, new M1 products inside the iMac, updated AirPods, and rumored AR/VR devices.
MIT Technology Review Comment: Don't Ban ChatGPT in Education
Like all new technologies in education, the initial response is to ban them. Consider Wikipedia, for example. Over a decade ago, the resource was chastised over the chance that a student might plagiarize an entry. On the contrary, it has become a well-sourced tool for initial research on a subject, complete with solid citations.
Fast forward to today: LLMs are tools for critiquing arguments, generating ideas, and offering a second opinion that alerts the writer or reader to salient points they may have missed. Never trust a technology at face value, but proper use cases must be taught (by faculty and parents) or students will fall behind.
My comment was originally posted for LinkedIn.