Interview with Stephen Wolfram on AI and Machine Learning
I don't normally share clips from podcasts I listen to, but in this case, it's well worth it. On Intelligent Machines, Episode 808, Stephen Wolfram, the creator of Mathematica and founder of Wolfram Alpha, shares his views on where AI goes from here.
He brilliantly discusses how AI will augment humans rather than completely replace them in the workforce (something I've been advocating for a while), why AGI is not what we think of it as today, how Wolfram Alpha approaches machine learning differently than most LLMs currently do, and the emergence of an "AI Civilization" that will operate independently of human authority.
The interview runs around 40 minutes but is well worth your attention.
Finished reading: Hit Refresh by Satya Nadella 📚
Finished reading: The Genesis Machine by Amy Webb 📚
Finished reading: Autocracy, Inc. by Anne Applebaum 📚
The Strategic Shift: Why UPS is Rethinking its Amazon Partnership and What it Means for Last-Mile Delivery
We're in the thick of earnings season, but one firm in particular stood out to me. UPS revealed a significant shift in its strategy: a 50% reduction in its Amazon delivery volume by the end of next year. This move, while surprising to some, is a calculated step aimed at improving UPS's profitability and streamlining its operations. Let's delve into the reasons behind this decision and what it means for the future of last-mile delivery.
While Amazon is UPS's largest customer, accounting for almost 12% of its revenue in 2024, the company believes that reducing its reliance on the e-commerce giant will ultimately benefit its bottom line. UPS is aiming to shift toward more profitable endeavors. This strategic pivot is crucial for enhancing UPS's profit margin: the percentage of revenue a company keeps after subtracting its costs.
About the Change
UPS's decision also comes amid a larger company-wide transformation. The company is reconfiguring its U.S. network and launching multi-year "efficiency reimagined" initiatives to save approximately $1.0 billion through an end-to-end process redesign. This includes an initiative to insource 100% of its UPS SurePost product. These initiatives are designed to make UPS a more profitable, agile, and differentiated company.
It’s All About Last Mile
The term "last mile" refers to the final leg of a shipment's journey from a transportation hub to the end-user's final destination. It's often the most complex, time-consuming, and expensive part of the shipping process. Last-mile logistics costs can be substantial, sometimes exceeding 50 percent of total shipping costs. Several factors contribute to these costs: labor, route optimization, fleet and warehousing expenses, the proximity of delivery points to the warehouse, and the number of deliveries along a route.
UPS’s move to reduce Amazon deliveries is likely connected to a desire to optimize its last-mile operations and cut costs. By reducing its reliance on one large customer, UPS can gain more control over its delivery network and potentially improve its efficiency and profitability.
UPS’s Financial Performance
UPS’s fourth-quarter 2024 results show a consolidated revenue of $25.3 billion, a 1.5% increase compared to the same period last year. The company’s diluted earnings per share were $2.01, with non-GAAP adjusted diluted earnings per share at $2.75, an 11.3% increase from the previous year. These results indicate a strong financial position. UPS expects 2025 revenue to be approximately $89 billion, with an operating margin of about 10.8%.
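As a quick back-of-the-envelope check, here's what that guidance implies in dollar terms, using only the figures above (a sketch of the arithmetic, not UPS's own math):

```python
revenue = 89e9           # UPS's 2025 revenue guidance, in dollars
operating_margin = 0.108 # guided operating margin

operating_profit = revenue * operating_margin
print(f"Implied operating profit: ${operating_profit / 1e9:.2f}B")  # ~$9.61B
```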
The Future
UPS’s decision to reduce its Amazon delivery volume is a strategic move to focus on more profitable projects and enhance its operational efficiency. By optimizing its network and streamlining its last-mile deliveries, UPS is positioning itself for sustainable growth and increased profitability. This shift underscores the importance of managing last-mile logistics effectively in today’s competitive market, where efficiency and customer satisfaction are paramount.
UPS's Q4 earnings report and press release can be found here.
Finished reading: The Big Nine by Amy Webb 📚
One of my goals going into 2025 was to talk less and listen more. Through this goal, I'm attempting to become more knowledgeable and well-rounded. Self-improvement matters to me in both my personal and professional lives. In this endeavor, I've been reading more eBooks and listening to more audiobooks. You'll see me post more about what I read and less about what I believe.
The next titles I plan on consuming (in no particular order) are as follows:
- "The Sirens' Call" by Chris Hayes
- "America's New Map" by Thomas P.M. Barnett
- "Autocracy, Inc." by Anne Applebaum
DeepSeek's Surprise Entrance into the AI Arena
DeepSeek, a Chinese AI startup, has rapidly become a major disruptor in the AI landscape with its new AI model, R1. This model has gained global attention for its ability to compete with models like OpenAI’s ChatGPT, but at a significantly lower cost. The emergence of DeepSeek has caused ripples across the tech industry, impacting stock markets and sparking debates about data privacy and the future of AI development.
DeepSeek was founded in mid-2023 by Liang Wenfeng, a Chinese hedge fund manager. The company's AI model, DeepSeek R1, was released on January 20, 2025, and quickly gained popularity. R1 is an open-source large language model built around what reports describe as "inference-time computing": activating only the most relevant parts of the model for each query, saving money and computational power.
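The cost savings come from sparse activation: rather than running every parameter for every query, a router selects a handful of specialized sub-networks ("experts") per token. Below is a minimal, hypothetical sketch of that routing idea in Python; the expert count, sizes, and router here are toy stand-ins, not DeepSeek's actual architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sparse_expert_layer(token, experts, router_weights, top_k=2):
    """Route a token to only its top-k experts instead of running all of them."""
    scores = softmax(router_weights @ token)   # affinity of the token to each expert
    top = np.argsort(scores)[-top_k:]          # pick the k most relevant experts
    # Only the top-k expert networks run; the rest are skipped, saving compute.
    return sum(scores[i] * experts[i](token) for i in top)

# Toy usage: 8 "experts" (here simple linear maps), only 2 of which run per token.
rng = np.random.default_rng(0)
d = 16
experts = [(lambda W: (lambda x: W @ x))(rng.normal(size=(d, d))) for _ in range(8)]
router = rng.normal(size=(8, d))
out = sparse_expert_layer(rng.normal(size=d), experts, router, top_k=2)
```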
This efficiency has enabled DeepSeek to achieve comparable results to other AI models at a much lower cost. The company reportedly spent only $6 million to develop its model, compared to the hundreds of billions being invested by major US tech companies. Nvidia has described DeepSeek's technology as an "excellent AI advancement," showcasing the potential of "test-time scaling". The model was developed using a stockpile of Nvidia A100 chips, which are now banned from export to China.
DeepSeek’s emergence has led to a significant drop in the stock prices of major tech companies, including Nvidia and ASML. Nvidia suffered its largest ever one-day market value loss, shedding $600 billion. This has led investors to question whether the market is overvaluing AI stocks. However, some analysts believe this is an overreaction, noting the continued enormous demand for AI. DeepSeek’s ability to achieve high performance at low costs has raised questions about the massive investments being made by U.S. tech companies in AI. Some analysts believe DeepSeek’s efficiency could drive more AI adoption.
OpenAI has accused DeepSeek of illegally using its models to train DeepSeek's own model. There are reports that DeepSeek may have used a technique called "distillation" to achieve results similar to OpenAI's model at a lower cost. DeepSeek has also experienced security breaches, exposing over a million user chat logs, API keys, and internal infrastructure details. Additionally, the company's privacy policy states that it stores user data, including chat histories, on servers in China. These security and privacy concerns have led the US Navy to ban its use.
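For context, distillation generally means training a smaller "student" model to match a larger "teacher" model's output distribution rather than learning from raw data alone. Here is a minimal sketch of the standard distillation loss, assuming a PyTorch setup; it illustrates the technique in the abstract, not anything confirmed about how DeepSeek trained R1.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft loss (match the teacher's softened distribution)
    with the ordinary hard-label cross-entropy loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients stay comparable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```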
The rise of DeepSeek has highlighted the limitations of US sanctions on Chinese technology, with some experts suggesting that the sanctions may have unintentionally fueled domestic innovation in China. President Trump has called DeepSeek’s launch a “wake-up call” for US companies.
DeepSeek's R1 model is capable of answering questions and generating code, performing comparably to the top AI models. However, it has faced criticism for sometimes identifying itself as ChatGPT. The DeepSeek AI app is available for free on Apple's App Store and online, though the company has had to pause new user registrations due to "large-scale malicious attacks". Due to privacy concerns, some users are exploring alternative ways to access DeepSeek, such as through Perplexity AI or by using virtual machines. Perplexity AI offers DeepSeek on its web and iOS apps, albeit with usage limits.
The DeepSeek story is still unfolding, with debates continuing about its security, ethical, and intellectual property implications. While some are skeptical of its longevity, especially in the US market, DeepSeek’s emergence has undoubtedly had a major impact on the tech landscape and has forced the AI sector to re-evaluate its strategies and investments.
Works Consulted:
“DeepSeek Exposes Database with Over 1 Million Chat Records.” BleepingComputer, 30 Jan. 2025, www.bleepingcomputer.com/news/secu…
Wilson, Mark, et al. “DeepSeek Live – All the Latest News as OpenAI Reportedly Says New ChatGPT Rival Used Its Model.” TechRadar, 30 Jan. 2025, www.techradar.com/news/deep…
Laidley, Colin. “What We Learned About the Future of AI from Microsoft, Meta Earnings.” Investopedia, 30 Jan. 2025, www.investopedia.com/what-we-l…
Picchi, Aimee. “What Is DeepSeek, and Why Is It Causing Nvidia and Other Stocks to Slump?” CBS News, 28 Jan. 2025, www.cbsnews.com/news/deep…
LLMs Will Augment Employment, Not End It
LLMs, such as OpenAI's GPT-3.5 and GPT-4, possess impressive language-processing capabilities. Yet despite their remarkable abilities, LLMs are not poised to replace human workers. In this blog post, we will explore how LLMs will augment employment rather than supplant it, providing evidence to support this claim.
Contrary to the doomsday predictions of job losses due to automation, LLMs are not designed to replace human workers entirely. These machines excel at processing and generating human-like text, but they lack the cognitive abilities, creativity, and emotional intelligence that make human workers invaluable. LLMs are tools that enhance human productivity rather than replace it. They can assist employees by automating routine and time-consuming tasks, enabling humans to focus on complex decision-making, critical thinking, and creativity.
While LLMs can generate vast amounts of information, fact-checking remains a critical aspect of responsible information dissemination. Although LLMs have been trained on vast datasets, they lack the discernment required to verify the accuracy of the information they generate. Human fact-checkers play a vital role in scrutinizing and verifying the content produced by LLMs, ensuring that only accurate and reliable information reaches the public. Their expertise and critical thinking skills cannot be replaced by machines, making human intervention indispensable in the fact-checking process.
LLMs excel at automating mundane and repetitive tasks, freeing employees from time-consuming activities and allowing them to focus on higher-value work. For example, in content creation, LLMs can assist in generating first drafts, gathering research, or providing suggestions, saving valuable time for human writers who can then focus on refining, adding personal insights, and injecting creativity into their work. This symbiotic relationship between LLMs and human workers increases efficiency, productivity, and overall job satisfaction.
It is essential to clarify that LLMs, including GPT-4, are not true AI. Despite their impressive capabilities, they lack true understanding, consciousness, and self-awareness. LLMs rely on pattern recognition and statistical processing rather than genuine cognitive reasoning. They do not possess subjective experiences or emotions. They are tools designed to process and generate text based on patterns learned from vast amounts of data. Therefore, LLMs cannot fully replicate the complexities of human intelligence, nor replace the multifaceted skills that humans bring to the workforce.
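To make the "pattern recognition and statistical processing" point concrete, here is a toy next-word predictor built purely from co-occurrence counts. Real LLMs use neural networks at enormously larger scale, but the family resemblance, predicting text from learned statistics rather than from understanding, is the point:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which word follows which: pure statistics, no understanding."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def predict_next(model, word):
    """Return the statistically most likely next word."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "the model predicts the next word and the next word follows the last"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # -> 'next' (its most frequent follower)
```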
The emergence of LLMs presents a promising future for the augmentation of employment rather than its replacement. LLMs will not replace human workers but will instead enhance their productivity and free them from mundane tasks. Fact-checkers remain indispensable in ensuring the accuracy and reliability of information generated by LLMs. It is crucial to remember that LLMs are not true AI; they lack the comprehensive cognitive abilities and emotional intelligence that make humans uniquely valuable in the workforce.
As we move forward into an era where LLMs become increasingly integrated into our lives, it is crucial to embrace their potential while acknowledging their limitations. By working alongside LLMs, humans can utilize the benefits of automation, focus on higher-value work, and tap into their unparalleled ability to think critically, be creative, and empathize with others. The key lies in understanding that LLMs are tools that enhance human capabilities rather than replacements for the multifaceted skills and ingenuity that define us.
Finished reading: Trailblazer by Marc Benioff 📚
Divestiture Debrief: The Kellogg Split
Earlier this morning before the bell, Kellogg announced that it would split itself into three separate tax-free spinoffs: the slow-growing cereal business, a snacking business, and an unnamed plant-based food business mainly consisting of Morningstar Farms. In the press release, the company said splitting these businesses will unlock shareholder value. During the past year, we've seen a number of larger, slower-growing businesses attempt to divest in order to grow.
Kellogg itself was unlikely to be acquired outright because of its slow-growing cereal business, in contrast to the deals Post and General Mills have previously announced. The last time a divestiture this important occurred in the packaged-food business, Kraft spun off its snacking business into what is known today as Mondelez. On an unrelated note this morning, Mondelez announced its acquisition of Clif Bar for $2.9 billion. Taking all of these industry movements as a whole, we see that, like investors, other companies chase growth and will ultimately acquire these smaller, faster-growing businesses while leaving the cereal business behind.
As the supply chain crisis squeezes margins in an already thinly profitable business, divestitures allow for easier optimization strategies, streamlining operations through separate entities. From a management standpoint, these standalone businesses can grow without interference from the original parent company. Larger companies that have difficulty growing, such as Kellogg, have resorted to this strategy as of late. Take IBM, for example. Last year, IBM spun off its legacy business, now known as Kyndryl, and fully integrated its faster-growing acquisition, Red Hat, into IBM's higher-growth business. At the end of the day, it all comes down to execution of the strategy, and IBM has yet to see much benefit from the faster-growth businesses rolled up into it.
At the beginning of the year, Johnson & Johnson announced that it would spin its consumer-brands business into a separately traded company while the parent focuses on its pharmaceutical business. Larger divestitures such as this can normally take up to two years, if not longer. Late last year, GE announced that it would be splitting into three companies as well: healthcare, aviation, and energy. GE never fully recovered from the 2008 financial crisis; the once-ironclad company flailed and quickly fell apart.
The jury is still out on whether Kellogg's decision will yield results. History offers a mixed bag, whether we look at the separation of PayPal from eBay or Kyndryl from IBM. Divestitures and spin-offs are just one tool for companies that have lagged the overall market, or their peers; in the end, the strategy and execution must be there to ensure that all entities are stronger apart than they were together.
On this Day in 2001: The Robert Hanssen Case
Yesterday, I was reminded on LinkedIn that on that day in 2001, the State Department notified the Russian government that four of its diplomats were considered persona non grata and were immediately expelled from the country (National Counterintelligence and Security Center, 2022). This is an older piece I wrote on Robert Hanssen in particular.
During the transition period from the Soviet Union to the Russian Federation, an FBI agent, Robert Hanssen, was handing over top-secret and codeword-clearance intelligence to Russian authorities at the KGB and its successor, the SVR (FBI, 2017). Hanssen used drop spots around the Washington, DC area from 1987 to 1991. After the Soviet Union fell, Hanssen backed off his acts of espionage. In 1999, Hanssen felt comfortable enough to relay information to a different set of agents put in place by the newly elected Russian President, Vladimir Putin (FBI, 2017). During this second string of conveying information, Hanssen was careless. The FBI knew that an agent was relaying classified information to the Russians.
The FBI proceeded to offer a former KGB agent $7 million for information about who the insider was (Johnston, 2002). Though he could not provide a name, the former agent gave up fingerprints located on the outside of bags in which documents had been delivered. Surveillance of Hanssen began, and he was caught taking classified information out of his office. As Hanssen made his last drop in Vienna, Virginia, he was surrounded by multiple FBI agents and taken into custody (FBI, 2017). The arrest was made on February 18, 2001. To avoid the death penalty, Hanssen made a deal to detail his actions and why he betrayed the country and the agency (FBI, 2017). He claimed he was providing for his family after being passed over for promotion.
Purpose & Motives
Hanssen's motivation for disseminating classified information was purely financial. He accumulated over $600,000 from his acts and was promised up to $800,000 more if he continued (Ragavan, Glasser & Barnett, 2003). Hanssen had financial problems: he was at the top of the FBI pay scale, yet deeply in debt due to mortgages and private school for his children (Defense Human Resources Activity, 2003). Decades passed as Hanssen sent highly classified information to Russian agents. In this respect, the FBI failed to catch on to his movements, especially when he was tasked with finding a mole who was, in fact, himself (Defense Human Resources Activity, 2003). The FBI, CIA, and law enforcement failed to work effectively with one another. It was only after decades of missteps, false accusations against other agents, and countless rounds of blame that Hanssen was finally captured. Had the agencies collaborated effectively, the damage to the intelligence community and national security would have been minimal.
Failures & Lessons
Questions were raised in 2003 regarding how Hanssen could have given classified information to the Russian government for so long. In previous years, he had been cited for security breaches, and in the 1980s he mishandled classified information (Lichtblau, 2003), which did not raise flags within the FBI. By flying under the radar, he was able to give up some of the US's most classified information, including nuclear secrets and intelligence-sharing tactics. Despite these infractions on his record, Hanssen still managed to earn promotions (Lichtblau, 2003). Fatal errors were made. Loopholes such as a lack of oversight and insufficient polygraph examinations may have been to blame. It is noteworthy that, because of the Robert Hanssen case, the FBI now polygraphs its employees upon hiring (PR Newswire, 2013). While taken for granted today, this was another flaw in the FBI's procedures for hiring new personnel and agents. Hanssen gave up the names of over fifty agents recruited within Russia and delivered information regarding flaws in US communications satellites (PBS Newshour, 2002), in addition to giving away technological secrets about US capabilities.
The FBI went into damage control as soon as Hanssen was arrested. Questions were raised regarding how he could have continued disseminating information to a foreign entity for over fifteen years. A blue-ribbon commission was set up at the FBI to determine what happened and how to catch other moles within the agency (McGeary, 2001). As a result of this commission, the FBI established a counterespionage division within counterintelligence to monitor suspicious activity inside the agency and with outside contacts (Office of the Inspector General, 2007). An internal database was also developed to collect employees' financial information, polygraph results, and background-check data in order to track suspicious activity (Office of the Inspector General, 2007). Written and practiced procedures needed to be developed to prevent FBI moles from being able to act in the first place.
Conclusion
The Robert Hanssen case is an example of one of the highest-caliber acts of espionage in United States history. As a double agent, he successfully passed classified information to Russia for decades. A lack of oversight within the IC and hubris inside the organization allowed such damage to national security to continue for so long. Had it not been for the pure chance of Hanssen leaving a trail the FBI was able to follow, chances are he would have continued his actions for the foreseeable future. In this pre-9/11 era, the recommendations for transforming the intelligence community had not yet been conceived or put in place. However, the blue-ribbon report from the Office of the Inspector General did make various recommendations, such as shareable databases of employment information, requiring all employees to be polygraphed upon hire, and financial background checks to reduce the opportunities for coercion or cooperation with a foreign entity.
References
Defense Human Resources Activity. (2003). Hanssen: Deep inner conflicts. DHRA. Retrieved June 15, 2017, from [www.dhra.mil/perserec/...](http://www.dhra.mil/perserec/osg/spystory/hanssen.htm)
FBI (2017). Robert Hanssen. Famous Cases & Criminals. Retrieved June 15, 2017, from [www.fbi.gov/history/f...](https://www.fbi.gov/history/famous-cases/robert-hanssen)
Johnston, D. (2002). FBI paid $7 million for file on American spying for Russia. New York Times. Retrieved June 15, 2017, from [www.nytimes.com/2002/10/1...](http://www.nytimes.com/2002/10/18/us/fbi-paid-7-million-for-file-on-american-spying-for-russia.html)
Lichtblau, E. (2003). FBI failed to act on spy despite signals, report says. New York Times. Retrieved June 15, 2017, from [www.nytimes.com/2003/08/1...](http://www.nytimes.com/2003/08/15/us/fbi-failed-to-act-on-spy-despite-signals-report-says.html)
McGeary, J. (2001). The FBI spy it took 15 years to discover one of the most damaging cases of espionage in U.S. history. An inside look at the secret life, and final capture, of Robert Hanssen. Time. Retrieved June 15, 2017, from [content.time.com/time/worl...](http://content.time.com/time/world/article/0,8599,2047748,00.html)
Ragavan, C., Glasser, J., & Barnett, M. (2003). The Traitors. U.S. News & World Report,134(3), 66. Retrieved June 15, 2017.
Office of the Inspector General (2007). A review of the FBI's progress in responding to recommendations in the office of the inspector general report on Robert Hanssen. Retrieved June 15, 2017, from [oig.justice.gov/special/s...](https://oig.justice.gov/special/s0710/final.pdf)
PBS Newshour. (2002). Damage assessment: Convicted spy Robert Hanssen. Retrieved June 21, 2017, from [www.pbs.org/newshour/...](http://www.pbs.org/newshour/bb/law-jan-june02-hanssen_5-10/)
PR Newswire (2013, October 2). Witness to history: The investigation of Robert Hanssen. PR Newswire US. Retrieved June 15, 2017.
Apple's Newest Diversification: Business Essentials
Much like your own investment portfolio, companies must continually diversify to smooth out the ebbs and flows of their business segments. Up until 2001, Apple, then still called Apple Computer, was primarily a hardware company. Then the iPod was released and forever changed how the music industry conducted business. In 2007, Apple released the iPhone, knowing it would cannibalize its own iPod dominance in the market. The App Store followed, and eventually the iPad saw the light of day, all while Apple continued selling its signature computers to "prosumers".
As Apple continued to reinvent itself every few years, the stock grew from a split-adjusted $0.31 per share to an all-time high of around $157 earlier this year. Most forget the 4:1 split back in August of 2020, only the 5th in its history. Flooding the market with product paid off for the company, as supply-chain expert and now-CEO Tim Cook knew exactly how much product to create and how to vertically integrate the hardware and software stack.
In late summer, Apple released the newest iPhone 13. For the first time in a long time, several Apple fans chose to pass on the latest edition, due in large part to the iterations becoming anemic. Slowly over the years, Apple has reduced its over-reliance on the iPhone, from 70 percent of revenue in 2015 to just about 50 percent today. For that, we can thank the M1 MacBook series, the Apple Watch, AirPods, and various other hardware. The one segment we overlook in Apple's diversification strategy moving forward is its software.
In comparison to iPhone, iPad and Mac, Apple’s services revenue has increased year-on-year. Some analysts see it as the most important segment of the company, potentially reaching $50 billion in profit by 2025.
Business of Apps
Apple's software services, such as Apple TV+, iCloud, Apple Arcade, Apple Fitness+, and more, all supplement the customer base Apple has captured into its ecosystem over the past 20 years. This is where the next generation of revenue and diversification comes from.
Last week, the company announced a new service called "Apple Business Essentials", aimed at small and medium-sized businesses already within the ecosystem, where customers conduct business on Macs, iPhones, and iPads. The service begins at $2.99 per device per month. It's designed as a supercharged AppleCare, so to speak, where the business owner or IT department can keep tabs on each employee's Apple device.
Apple Business Essentials allows users to control which apps and settings are available on each employee's device. An added bonus is the additional iCloud storage included with each subscription: 2TB on the highest-end tier. By no means is this revolutionary, but it is one more step toward cementing the diversification of Apple's services business.
Apple Business Essentials is a free app to get the apps and support you need from your company, all in one place. The Essentials app is automatically installed when you sign in with your Managed Apple ID that’s enrolled in Apple Business Essentials device management.
Apple Support
Microsoft's Office 365 and Google's Google One also have entry-level business products with the added benefit of increased cloud storage. As storage becomes less expensive with more servers built worldwide, it will become a basic add-on to most SaaS products as its cost becomes negligible to software companies.
Business Essentials is available as an app for iOS, iPadOS, and macOS, now in beta and expected to be generally available in early 2022. Unlike Microsoft, Apple has yet to make inroads into the business community with an array of SaaS services. This move, while a small encroachment, may signal a significant push by Apple to provide ecosystem services that create lock-in and fuel the continual hardware sales Apple needs to maintain its lofty quarterly earnings reports.
Google's Tensor: The Data Company's Data Chip
Ever since the release of the iPhone in 2007, Apple has moved toward designing its own chips for its own devices. At the time, "owning the supply chain", or the vertical, was the way to control the full stack, from hardware down to software, across the manufacturing and distribution process. Since then, economies of scale have allowed Apple to make more revenue on each phone sold.
An unrealized benefit at the time was that creating a device maker's own chips also allows for unique customization and experimentation with SoCs, letting manufacturers differentiate themselves from one another. Other examples include Samsung utilizing its own chips overseas, Microsoft's SQ-series chips in the Surface Pro X (its Windows-on-ARM offering), and the newest entrant, Google's Tensor chip, which is the focus here. What's important to take away from these examples is that the manufacturer owns both the hardware and software stack, so in theory components can become more efficient and intelligent across the entire device. Outside of mobile, we see Apple bringing the same concept to its laptops with the M1 series.
As the focus of this blog turns to all things data, the Tensor chip is the most interesting and dynamic from the standpoint of AI and ML on the new Pixel 6 and Pixel 6 Pro devices. This is not a site for phone reviews, so I will not stray into reviewing it, but rather look at what the Tensor's specifications and future mean for Google.
Throughout the pandemic, Google was slow to release innovative iterations of its Pixel line, likely due to chip shortages and the challenge of creating an SoC from scratch. Google entered the market last month with the Tensor, which is unlike anything else on the market, for better or worse. Since Google itself is not a semiconductor company, nor does it operate fabs, it has chosen Samsung to produce the final product.
Functionality such as improved speech recognition, voice typing, live translate, and magic eraser to remove photobombers from pictures are all based on AI.
ZDNet
As Tensor-based devices offer these differentiations in product use, one of the often-forgotten benefits of an AI/ML blend is the security on the SoC. The Titan M2 component on the die allows for hardware-based security that will ideally stop attacks aimed at the device itself, such as brute-force attempts to bypass the fingerprint sensor.
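As an illustration of the kind of policy a hardware security block can enforce against brute-force attempts, here is a toy sketch of exponential-backoff throttling. This is purely illustrative logic, not Titan M2's actual design.

```python
import time

class ThrottledUnlock:
    """Toy model of hardware-enforced throttling of unlock attempts.
    Illustrative only -- not how Titan M2 actually works."""
    def __init__(self, secret, base_delay=1.0):
        self.secret = secret
        self.failures = 0
        self.base_delay = base_delay
        self.locked_until = 0.0

    def try_unlock(self, attempt):
        now = time.monotonic()
        if now < self.locked_until:
            return False  # still in enforced cool-down; attempt rejected outright
        if attempt == self.secret:
            self.failures = 0
            return True
        self.failures += 1
        # Each failure doubles the cool-down: 1s, 2s, 4s, 8s, ...
        self.locked_until = now + self.base_delay * (2 ** (self.failures - 1))
        return False
```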
Google's first Tensor device will learn from your habits and make suggestions based on usage to save battery, power automatic speech processing, and drive the Magic Eraser that removes unwanted background intruders from your photos. Of all the Pixel's features over the years, the place where the Tensor SoC really shines is computational photography.
Other cool camera features thanks to Tensor include Magic Eraser, a feature that erases unwanted objects or people from photos. This feature uses Google's ML to do the task of what somebody would need to do on Photoshop, but in an instant.
Tom's Hardware
Rather than relying on optics like a traditional camera, Tensor utilizes its AI components to fill in areas that are dull or even missing. In theory, the machine-learning component of the SoC will allow features like Magic Eraser and Face Unblur to improve over time based on individual usage trends.
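For a feel of what "filling in" a region means, here is a minimal sketch using OpenCV's classical inpainting. Magic Eraser relies on Google's proprietary ML models, so treat this as a stand-in for the concept of reconstructing masked pixels from their surroundings; the file names and mask position are made up.

```python
import cv2
import numpy as np

# Load a photo and build a mask marking the region to erase (white = remove).
img = cv2.imread("photo.jpg")
mask = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.circle(mask, (220, 180), 40, 255, -1)  # pretend the photobomber is here

# Fill the masked region from surrounding pixels (Telea's method).
# ML-based erasers learn far richer fills, but the goal is the same.
result = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("photo_erased.jpg", result)
```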
Given that this is the first-generation Tensor SoC and Google is primarily a data company, this type of component plays to its core competency. Though Google is famous for deprecating or ending products like Google Wave and Google+, research-intensive projects such as SoC design and implementation are not something that can easily be explained away as a "failed software product". Hardware costs much more to develop in any technology company's R&D department.
The rumor mill is already circulating about the next-generation Pixel, presumably the Pixel 7, with the next generation of the Tensor stack. It would make sense for Google to take all of the usage data it has collected from the first Tensor chips and use it to improve the AI and ML on future devices.
Disclosure: I own the Pixel 6 and use it as my daily driver.
Primary Source Material is Crucial for Facts & Research
Back in college, we all had access to those often bulky, hard-to-use research databases that sometimes worked but just as often steered us in the wrong direction. We had to teach ourselves Boolean operators to properly navigate them. There was a reason other than torture for using them: to help us all find primary-source materials for our research papers.
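For anyone whose Boolean skills are rusty, a hypothetical database query might look like this, where AND narrows results, OR broadens them, NOT excludes terms, and quotation marks match exact phrases:

```
("non-farm payroll" OR "employment situation") AND "Bureau of Labor Statistics" NOT forecast
```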
As is often the case, that was about the only time we used academic sources, raw data, and studies to conduct our own analysis of a given topic. Unfortunately, many of us have let those skills lapse in our everyday lives. We read only tweets, not the accompanying stories, and don't question them. We turn on cable news for answers. As soon as we rely on others to conduct the analysis, we lose control of what is fact and what is not.
For the purposes of this post, I'll look at the continued importance of primary sources in conducting our own research, whatever the topic. I'll point out what to search for, how to do so, and how to read misleading studies and research.
As stated before, this should be a refresher from college or even high school, yet we forget such things in the era of social media and cable news. Bias is everywhere, and it should be the first factor you weigh when looking for a source to research a claim.
Consider a historical example: a database may contain a personal letter from John F. Kennedy or Richard Nixon urging a constituent to vote their way. While this is a primary source, is unique, and certainly has its place in history, it is quite biased and should not be used to fact-check, unless the piece is part of a larger historical research project.
Let's take the example of a major economic number: the monthly non-farm payroll report from the Bureau of Labor Statistics. This is the primary source for all data relating to U.S. employment, unemployment, and wages, including a breakdown of where jobs were gained or lost, and why. The data are collected by survey. Since COVID-19, the BLS has also factored in the ways in which survey takers communicate their situations.
The response rate for the household survey was 75 percent in September 2021. While the rate was lower than the average before the pandemic of 83 percent for the 12 months ending in February 2020, it was considerably higher than the low of 65 percent in June 2020.
Bureau of Labor Statistics
We must keep in mind, as the response rate returns to more normal levels, that there may still be some slack among respondents, creating a larger margin of error (MOE). As sample sizes decrease, the chance of skew increases. Though this is a primary source, keep in mind any data deterioration that may have arisen while the survey was collected, in this case for the month of October 2021.
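To see how response rates feed into uncertainty, here is a rough sketch using the textbook margin-of-error formula for a proportion. The household count approximates the survey's size, but the calculation is illustrative; the BLS's actual methodology uses weighting and variance estimation far beyond this:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# A notional 60,000-household survey at an 83% vs. 75% response rate.
for rate in (0.83, 0.75):
    n = int(60_000 * rate)
    print(f"responses={n}: MOE = +/-{margin_of_error(0.5, n):.4%}")
# Fewer responses -> a slightly wider interval around any estimated share.
```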
Continuing through the document, the BLS discusses the misclassification issue. Surveys are point-in-time measures and are simply not capable of covering entire populations. Technically, if an employee is "on leave" due to COVID, they are not considered unemployed; thus, a misclassification has taken place.
If the misclassified workers who were recorded as employed but not at work for the entire survey reference week had been classified as “unemployed on temporary layoff,” the unemployment rate would have been higher than reported.
Bureau of Labor Statistics
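A toy calculation with made-up figures shows why this matters to the headline number; shifting misclassified workers from "employed" to "unemployed" raises the rate without changing the size of the labor force at all:

```python
# Hypothetical figures (millions) -- not actual BLS data.
employed = 150.0
unemployed = 7.5
misclassified = 1.0  # recorded as "employed, not at work" all reference week

labor_force = employed + unemployed
print(f"Reported rate:  {unemployed / labor_force:.1%}")   # -> 4.8%

# Move the misclassified workers to "unemployed on temporary layoff".
adj_unemployed = unemployed + misclassified
adj_employed = employed - misclassified
print(f"Adjusted rate:  {adj_unemployed / (adj_employed + adj_unemployed):.1%}")  # -> 5.4%
# The labor force total is unchanged; only its composition shifts.
```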
Given that COVID-19 was a once-in-a-generation situation, statistical measures can be improved going forward should other disruptive events occur. As with all data science, hypotheses must be carefully constructed; methodologies matter all the more at the national level, as with the BLS; and the data require further refinement and consultation over what "voluntary leave" or "furloughed" mean if these become larger data points in future surveys.
An economic number may not be what it seems in a headline or an article posted on CNBC, Bloomberg, NYT, or Fox Business, for example. Their job is to get clicks and engagement (positive and negative); it's how these sites and companies boost ad revenue in a world where Facebook and Google dominate the online ad market. It's your job to question where the claims came from, consider what the bias may be, and retrace the steps to gain a deeper understanding of what the numbers are really telling you.
Though it may seem a bit absurd, we all must be capable of basic data science when it comes to understanding a headline. False claims and skewed articles run rampant in the age of social media. Older publications have unfortunately fallen into the same category as they race for clicks and their own share of the ad market. Leave your own biases aside when considering what to think after reading questionable content. Do your homework, as it were. The true comprehension of the story will come through, and you can inform others why these pieces may have gotten it wrong.
One Year at a Non-Profit
Volunteering has always been part of my personal ethos. I'm reminded of my time as an AmeriCorps VISTA at Engaging Creative Minds, where I spent a year from 2017 to 2018. I wrote this post some years ago, but it describes an experience I hope all younger folks will take advantage of.
In creating my plans long ago, the thought never crossed my mind of actively working for an educational non-profit for a year through the AmeriCorps VISTA program. One year has passed, and I have fulfilled my commitment and dedication to this organization. The background I possess has applications far beyond non-profits, and I also learned quite a bit about this different type of structure along the way.
My mind works like a project manager's: processing ways to try new methods and procedures more quickly, failing faster, yet leaving a breadcrumb trail that paves the way for others not to make the same mistakes. Learning and failing are okay if something has never been tried before. With a Six Sigma eye, every moment of productivity moves through my brain on a filter. Planning two or three steps out to measure potential outcomes is paramount to any organization's success. Non-profits are no different.
Agile Atmosphere. Many people are often working across long-term projects, operations, finance, and outreach at once. Non-profits have quick turnover, yet a dedicated base of volunteers, funding mechanisms, and grants. Financial and non-financial players demand that the organization be open, especially to those most inclined to visualize success. Documentation is key to enduring success. Moving quickly past what does not work allows for successful pivoting across multiple strategies.
Experimentation. Reaching the next internal goal is vital to expanding the organization's reach to the community and to funders. Getting there takes more creativity than personnel may be used to. A fear of trying and failing still plagues the mindset in for-profit and government organizations. In non-profits, this thinking must be a way of life, as if organizational survival depends on it, because it does.
Scrum. In smaller organizations, there may be only one or two people heading a department, and that is all. Departments often depend on one another to see a strategy through to full implementation. Creating cross-functional teams, or pairing individuals, to move through plans is the only way to ensure cohesiveness. After the formulation and implementation phases, everyone must be brought in and briefed on what's next. This eliminates duplication of tasks and effort, and allows for more frequent but quicker meetings about potential roadblocks.
This unique experience gave me important insights into how the multibillion-dollar non-profit sector works. The larger takeaway is that organizational behavior differs only slightly among non-profits, government agencies, and the for-profit sector. The missions and goals are similar; only the stakeholders differ. We all serve a vast yet similar set of stakeholders throughout our lives.
A Reminder of Intelligence Leadership and National Oversight
As the Intelligence Community (IC) works to rebuild trust between its agencies, its citizens, and partner nations, this is a great time to remind ourselves of the basics of leadership within the IC and its ongoing struggle with national oversight through checks and balances. I originally wrote this piece in October of 2018.
Setting the Scene
It is the role of Congress to monitor and create oversight of the intelligence community. Congress checks for the abuse of power within the seventeen agencies including the ODNI and considers abuse of power by the other two branches of government. The House Permanent Select Committee on Intelligence (HPSCI) and the Senate Select Committee on Intelligence (SSCI) are the two primary governing bodies that provide oversight of American intelligence (Rosenbach & Peritz, 2009). As the Executive Branch sets foreign policy and intelligence priorities, Congress is to be kept ‘fully informed’ of large-scale intelligence activities set forth by The White House (Rosenbach & Peritz, 2009). This procedure is mandated by the National Security Act of 1947.
The laws of oversight and leadership become complicated as competing interests are tasked with budgeting, making intelligence law, enacting recommendations, and following through on programs. Each stakeholder has its own interest. These interests often clash, as the oversight of the IC has turned into a power grab by congressional committees as well as the executive and judicial branches. The politicization of intelligence matters, the lack of congressional oversight, and the increasing influence of the executive branch must all be addressed to ensure the best possible leadership and oversight outcomes.
The Politicization of Intelligence Matters
Politicization of the IC's recommendations, outcomes, and funding does not stem just from political parties, but rather from expediency, groupthink, and biases that may exist within Congress or the Executive Branch. Mark Lowenthal, a career intelligence expert, notes that analysts alter assessments to support policy, that lawmakers influence the outcomes of intelligence analysis, that analysts have a history of cognitive bias, and that lawmakers often "cherry-pick" analysis to support their own ideals (Tomes, 2015). Such destructive habits undermine the autonomy and independence of the intelligence community. Additionally, given the secrecy in which the IC operates, dissent is often dismissed, and whistleblowers may be silenced as a result.
Former Secretary of Defense and one-time CIA Director Robert Gates described the politicization of intelligence in the 1990s as "deliberately distorting analysis or judgments to favor a preferred line of thinking irrespective of evidence" (Gleeson, 2017). Other problems with politicization revolve around false information, groupthink narratives, and categorizing intelligence threats based on a single individual rather than a process involving many (Gleeson, 2017). The use of estimative probabilities in intelligence assessments is another opening for politicization by Congress. The goal of estimative probabilities is to reduce the amount of uncertainty when analyzing information (Friedman & Zeckhauser, 2012).
Terms such as probably, likely, certain, and somewhat certain are open to numerous interpretations by Congress and can therefore be politicized by those who want intelligence to say what is politically convenient rather than what may actually be occurring. Estimative probability is vital to assessing viable alternatives to an action. Analysis of competing hypotheses (ACH) is the core of intelligence (Friedman & Zeckhauser, 2012). The IC must conduct this analysis free of all outside influence. However, when the analysis is presented to congressional committees or the White House, options may be chosen based on their own interpretations of risk, an open-ended definition of estimative probability, and what is politically expedient.
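For a sense of how the IC itself has tried to pin these words down, the analytic standards in ICD 203 assign estimative terms to rough probability bands. The mapping below is approximate and from memory, so treat it as illustrative rather than authoritative:

```python
# Approximate probability bands for estimative terms, in the spirit of the
# IC's ICD 203 analytic standards (paraphrased; illustrative values only).
ESTIMATIVE_BANDS = {
    "almost no chance":    (0.01, 0.05),
    "very unlikely":       (0.05, 0.20),
    "unlikely":            (0.20, 0.45),
    "roughly even chance": (0.45, 0.55),
    "likely":              (0.55, 0.80),
    "very likely":         (0.80, 0.95),
    "almost certain":      (0.95, 0.99),
}

def words_for(p):
    """Map a numeric probability back to an estimative term."""
    for term, (lo, hi) in ESTIMATIVE_BANDS.items():
        if lo <= p <= hi:
            return term
    return "out of defined range"

print(words_for(0.7))  # -> 'likely'
```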
Lack of Congressional Oversight
A long-term debate exists as to whether congressional oversight of the IC is too intrusive, counterproductive, and political, or does not go far enough to keep program scopes and civil liberties in check. As Congress represents the American people, it must strike a balance between doing what is necessary for national security and explaining its votes and policies to voters. Loch Johnson, a University of Georgia professor, stated, "September 11 was an intelligence failure, but it's also a policy failure, not only in the White House but in Congress. There's really a heavy onus on these intelligence committees to probe what's going on" (Priest, 2004). Politicization of intelligence often sacrifices national security and consideration of real threats to what may be convenient for the next congressional election.
Former Ohio Senator Mike DeWine has stated that the learning curve is quite large for those serving on the congressional intelligence committees (Priest, 2004). The policies and procedures of the IC are too cumbersome to fully learn, especially within one Senate or House term. DeWine recommends a restructuring to make oversight more "user friendly" for the average Congressperson to understand and convey to constituents and to the agencies themselves. A solution must involve revisiting the 9/11 Commission Report and further streamlining dissemination techniques now that most of its recommendations have been implemented at the Federal, State, and local levels. Lastly, the channels between Congress and the intelligence community must be further insulated from political influence.
Were the congressional oversight committees consolidated into fewer bodies, the Executive Branch would gain more influence, as would the specific members of Congress on the new committees. The politicization of individual nominations to several intelligence agencies would also become a problem, further eroding the independence of intelligence.
Executive Branch Influences
Prior to September 11th, the Executive Branch managed to reduce the number of congressional committees overseeing intelligence from eight to two over the course of the 1980s (Halchin & Kaiser, 2012). As a result, it also reduced the number of Congressmembers who could receive information requiring a clearance. Because Congress is not briefed as frequently as the Executive Branch on intelligence matters, it is difficult for it to tell what may constitute a national emergency or a crucial military exercise (Marshall, 2008). Congress must rely on the Executive Branch to disseminate information to the relevant committees (Marshall, 2008). This is one example of how the Executive Branch holds more power over intelligence than its Legislative Branch counterparts. A White House with more power over oversight than Congress lessens the effectiveness of intelligence and exemplifies an ever-expanding Executive Branch.
In the 2006 midterm elections, the Democratic Party won the House and Senate from the Republican Party and President George W. Bush, in part due to backlash over the War in Iraq. That was not enough to stem the tide of the war, as Bush repeatedly persuaded Congress to expand the War in Iraq even though a majority opposed the operation (Marshall, 2012). The overreach of executive power is evident in this case, as mistrust of the Bush Administration had built up in the lead-up to the initial operation in Iraq in 2003.
When a President submits the yearly budget to Congress, the Office of Management and Budget (OMB) is primarily responsible for the task ("An Overview of the Intelligence Community", 1996). As an expansive Executive Branch continues to take hold, the OMB's influence over Congress on intelligence budgets grows more important. As for Presidential appointments within the intelligence community, crucial roles such as the CIA's Inspector General, the heads of the seventeen intelligence agencies, and other vital oversight positions are nominated by the President and approved by Congress ("An Overview of the Intelligence Community", 1996). Such powerful, hand-picked nominees are often non-controversial; however, Congress does not normally take the time to investigate their backgrounds, but rather takes the Executive Branch at its word.
Wrapping it Up
Executive overreach and influence are not going to decrease in the foreseeable future. Congress must put more checks and balances between itself and the Executive Branch in order to look without bias at intelligence reports prepared by the various agencies. As all of the United States' intelligence agencies are bound by law to follow the Constitution and are subject to oversight ("Accountability and Oversight", n.d.), oversight is the right instinct, and Congress should continue to exercise it. However, issues such as an increasingly powerful Executive Branch, the polarization of Congress, and the lack of general oversight all contribute to failures in leadership and effective intelligence.
As Congress is elected by the American public at large, it may be prudent to inform all members of both bodies about basic national security issues. Confusion and a larger-than-normal learning curve may hinder this understanding among members not on the HPSCI and SSCI. The HPSCI and SSCI would still retain the more sensitive information; however, all members of Congress must understand the bare minimum of developments domestically and internationally. It is ultimately the public who benefits from national security and intelligence services. A Congress better able to explain to the public what the IC does leaves the public, in turn, better able to elect individuals who will create stronger oversight and laws to prevent abuses of this information.
Endnotes
Accountability and Oversight. (n.d.). Government Publishing Office. Retrieved October 24, 2018, from [www.gpo.gov/fdsys/pkg...](https://www.gpo.gov/fdsys/pkg/GPO-INTELLIGENCE/html/int018.html.)
An Overview of the Intelligence Community (1996). Retrieved October 24, 2018, from [fas.org/irp/offdo...](https://fas.org/irp/offdocs/int023.html.)
Gleeson, D. (2017). The high cost of politicizing intelligence. The Atlantic. Retrieved October 24, 2018, from [www.theatlantic.com/politics/...](https://www.theatlantic.com/politics/archive/2017/02/the-high-cost-of-politicizing-intelligence/517854/)
Halchin, L. & Kaiser, F. (2012). Congressional oversight of intelligence: Current structures and alternatives. Congressional Research Service. Retrieved October 24, 2018, from [fas.org/sgp/crs/i...](https://fas.org/sgp/crs/intel/RL32525.pdf.)
Marshall, W. (2008). Eleven reasons why presidential power inevitably expands and why it matters. Boston University Law Review, 88(505). Retrieved October 24, 2018, from [www.bu.edu/law/journ...](http://www.bu.edu/law/journals-archive/bulr/documents/marshall.pdf.)
Priest, D. (2004). Congressional oversight of intelligence criticized. The Washington Post. Retrieved October 23, 2018, from [www.washingtonpost.com/archive/p...](https://www.washingtonpost.com/archive/politics/2004/04/27/congressional-oversight-of-intelligence-criticized/a306890e-4684-4ed4-99a0-c8ae7f47feb7/?utm_term=.3031174d278e)
Rosenbach, E. & Peritz, A. (2009). Congressional oversight of the intelligence community. Harvard Kennedy School. Retrieved October 22, 2018, from [www.belfercenter.org/publicati...](https://www.belfercenter.org/publication/congressional-oversight-intelligence-community.)
Tomes, R. (29 September 2015). On the politicization of intelligence. War on the Rocks. Retrieved October 22, 2018, from [warontherocks.com/2015/09/o...](https://warontherocks.com/2015/09/on-the-politicization-of-intelligence/.)
Ditching Instagram: Focusing on Meaningful Connections
Yes! You heard it here first. Like all of you, I was excited for Instagram when it first hit the scene back in late 2010 and still had my original account from that time. Meta (formerly Facebook) famously purchased the business for $1 billion and successfully integrated it into its ad network and social graph, but I'm not here to relive or debate history -- we can save the positives and negatives for another post.
This is not to bash the platform, nor to criticize those who use it to build their businesses, brands, and outreach. I do not have those needs. Mine was a personal account that I spent way too much time on, "doom scrolling" and searching for vanity likes, outreach, and engagement. A personal account should not be used for this purpose; it adds no value and, frankly, grows into one big time sink.
If you are a former reader of mine, you'll notice one big advantage thus far -- I'm writing a blog post. Not a LinkedIn snippet or a Bluesky repost, but an actual post, which I have not written in quite some time. My annual domain registration and WordPress bills are coming due, and I want to take the time this year to build out my writing and reach through conversations, not vanity contests.
We must also consider mental health. In recent decades, well-being in this space has been taken more seriously than ever, and to different folks that means different outcomes. For me, the question is -- what could I best be spending my time on for my skill set, my career, and helping others? These values are important to me, and Instagram dopamine hits were not contributing meaningfully to them.
So, what will I fill my days doing? I plan to evolve my personal networking techniques, read more (whether it's audiobooks, eBooks, or good old-fashioned tree-killers), and post when and where it matters. I plan on making meaningful contributions to other publications to extend my reach and expertise.
The advances in AI over the past two years have really made me reflect on which platforms and mediums are meaningful and whether they help or harm the cause. I should, and likely will, write many more posts on that topic. I need to learn more and talk less. Pushing out photos and media that feel "forced" is not a strategy worth pursuing.
If you would like to follow in these footsteps, I've included a link on how to delete your Instagram account. Be careful: logging back in during the 30-day window will reset the timer, and you'll have to start the countdown over again. Your mileage on taking this action can and should vary. I'm looking forward to using my new-found time to create longer, researched, in-depth posts and being confident enough in my conclusions to post them on the platforms I still use.