Over the past year, I've been working closely with diverse organizations across Chicago, guiding employees and leaders on how to effectively—and responsibly—leverage generative AI. In virtually every session, I've encountered the same alarming reality: employees eagerly using freely available AI platforms with minimal, if any, oversight. Even more concerning, sensitive personal and proprietary company information regularly makes its way onto these unprotected public tools—often lacking essential security measures like two-factor authentication or enterprise-grade data protection.
In the IT risk management space, this is called "Shadow AI," and Chicago's executives must step back and initiate frank, transparent conversations about what responsible AI usage genuinely entails. Too frequently, employees hesitate to disclose which AI tools they use or the tasks they're automating, fearing repercussions in the face of absent or unclear internal policies.
Beyond the security exposure, employee reticence also stifles learning, inhibits transparency, and ultimately puts valuable data at even greater risk. Leaders now have a timely opportunity—indeed, a responsibility—to clearly articulate their organization's stance on AI, craft thoughtful usage guidelines, and create an environment of trust where successes and lessons learned are openly shared.
Chicago, renowned globally for its strength in finance, healthcare innovation, robust manufacturing, and a thriving tech ecosystem, faces real risk if businesses allow unmanaged AI practices to continue. The consequences can be severe: data breaches, intellectual property loss, compromised client confidentiality, and even reputational harm to our city's image as an innovation powerhouse.
Recent research from the MIT Center for Information Systems Research underscores that outright bans on employee-driven "Bring Your Own AI" (BYOAI) initiatives inevitably fail. Workers, driven by legitimate productivity goals, will always find workarounds. Instead, the practical question for Chicago executives becomes: How can we responsibly integrate generative AI into our workflows while maintaining stringent data security?
Drawing from my hands-on experience across sectors, I see four practical steps organizations can adopt immediately to guide responsible AI use:
First, clearly defined guardrails are essential. Too often, employees unintentionally breach privacy standards or expose proprietary information simply because clear, company-wide AI guidelines don't exist. Chicago's strength lies in its diverse talent pool—experts in technology, compliance, privacy, and law—perfectly positioned to draft robust, practical AI policies collaboratively. Instead of a blanket ban, companies should consider introducing thorough, accessible guidelines and building cross-functional teams to craft and swiftly implement transparent guardrails, balancing cautious oversight with ample room for innovation. This can be as simple as an AI usage policy detailing which platforms and license tiers are approved (an enterprise ChatGPT license, for example) and how company data may be used with them, or as involved as a full rollout of ISO/IEC 27001 certification for information security management.
Second, cultivating "AI direction and evaluation skills" is critical. I've observed Chicago professionals trusting generative AI outputs unquestioningly, even in critical scenarios like strategic planning or sensitive internal communications. This blind reliance introduces unnecessary risk. Companies should consider significant investments in training that helps employees critically evaluate, refine, and validate AI-generated insights. Regular, structured "virtual training hours" can ensure hands-on practice under expert guidance—enabling employees to spot inaccuracies and harness AI more effectively. Chicago companies can quickly build what I like to call "AI-enablement in context": opportunities that connect the dots between the daily workflows of high-value employees and the limits of what's possible with current AI tools. Such a model produces skilled, critical AI users capable of enhancing their impact without compromising accuracy or confidentiality.
Third, sanction a curated selection of trusted generative AI tools. The root issue I've seen isn't that employees are leveraging AI but that they're doing so on inappropriate platforms, posing unnecessary risks. Organizations must proactively identify and endorse secure AI solutions that meet enterprise-grade security standards, such as encryption, role-based access, and multi-factor authentication. Companies can then maintain a curated catalog of approved solutions where staff can access pre-vetted, secure tools along with training resources. While adopting licensed, secure AI tools may carry initial costs, research consistently finds that these investments pay dividends in productivity gains, risk mitigation, and employee satisfaction.
Fourth, build a culture of transparency around AI usage. One of the biggest barriers to effective AI adoption isn’t technical—it’s cultural. Employees won’t share what tools they’re using or how they’re using them if they fear being penalized, viewed as expendable, or accused of stepping outside undefined policies. Similarly, many executives hesitate to disclose how they themselves are using AI to make decisions, evaluate performance, or develop KPIs. But silence creates silos, and silos stifle innovation.
Executives must take the lead here. When leaders openly share their own experiences—what’s worked, what hasn’t, how AI has helped or surprised them—it sets the tone for an organization where curiosity is welcomed and learning is shared. Establishing forums for feedback, creating cross-functional AI roundtables, and embedding AI usage reflection into existing team rituals are all ways to normalize this dialogue. Transparency not only helps spot risks like shadow AI usage but also enables the cross-pollination of great prompts, novel use cases, and workflow enhancements that can raise the bar for the entire organization.
The key is alignment: Imagine employees at every level fluent in cutting-edge generative and traditional AI tools, driving productivity and efficiency gains while minimizing risk through responsible use, even as boards and executive leadership teams, in partnership with IT, provide clear policies that fuel innovation and protect data. When the workforce and leadership move in unison on AI, the whole region stands to benefit.
Looking ahead, Chicago can secure long-term economic and technological leadership if it builds an ecosystem of trusted AI use, shared standards, and transparency. Achieving this will require a concerted effort across companies and industries, but the payoff is immense: a future where Chicago sets the pace in the AI-driven economy. For a city that famously makes no little plans, this is a significant opportunity. The city can emerge as a national leader in smart AI policy, workforce enablement, and innovation. In doing so, Chicago would turn a potential risk into a model of best practices for the country.