Dr. Matthew D Gonzalez
// TALK: Next-Gen Security: Roadmap for the Next Generation
// MISSION_OBJECTIVE (ABSTRACT)
Join us for a brief update on how the UC Cyber Security Programs continue to strengthen our proven foundations while engineering the future of defense. The University of Charleston is setting the standard in cybersecurity education. We aren’t just teaching tech, we are forging the next generation of cybersecurity warriors. By blending elite technical skills with decisive leadership, UC is arming students to neutralize imminent threats and dominate in a fast-paced digital battlefield.
// DEPLOYMENT_HISTORY (BIO)
Dr. Gonzalez started as an IT Intern and grew his career as a Developer, Systems Analyst, IT Architect, IT Project Manager, Chief Data Officer, and Department Chair and Professor. Dr. Gonzalez has 25 years of experience: - Presenting research at Harvard University - Performing training at the Pentagon - Receiving an award from the National Diversity Council - Gaining the trust of IT professionals worldwide
Ian Frist
// TALK: Bring Your Appetite: Aligning Cybersecurity Risk with Enterprise Strategy
// MISSION_OBJECTIVE (ABSTRACT)
Cybersecurity risk isn’t a side dish—it belongs at the head table of enterprise risk management. In this session, Ian Frist, Director of Governance, Risk & Compliance at Corning, explores how organizations can stop treating cyber risk as a siloed technical concern and start integrating it into their broader risk appetite framework. Using real-world stories from the trenches (without naming names), Ian will unpack the consequences of misaligned appetites—where security teams over-restrict or under-protect due to unclear enterprise priorities. He’ll challenge the common misconception that cybersecurity risk appetite is just about controls and compliance, and show how it’s really about leadership, business context, and strategic clarity. Attendees will leave with: • A practical framework for aligning cyber risk appetite with enterprise risk appetite • Tips for communicating risk appetite across technical and non-technical stakeholders • A fresh perspective on how to “serve” cybersecurity risk in a way that satisfies the whole organization Whether you’re a seasoned GRC leader or just pulling up a chair to the risk table, this session will help you bring your appetite—and leave with a full plate of actionable insights.
// DEPLOYMENT_HISTORY (BIO)
Ian Frist is a cybersecurity leader with a strategic focus on IT risk and compliance across global operations. As Director of Governance, Risk, and Compliance at Corning, he leads a worldwide team navigating complex regulatory and cybersecurity frameworks including CMMC, NIS2, China CSL, TISAX, NIST CSF, and Sarbanes-Oxley. Ian brings a unique perspective shaped by experience in both the private sector and government, enabling him to bridge operational realities with strategic oversight. He holds a Master of Science in Cybersecurity from the University of Charleston and completed the Chief Risk Officer program at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy. Ian is a CISSP and maintains a portfolio of additional industry certifications. He also serves as an Adjunct Professor at West Virginia University and volunteers as Vice President of Enterprise Risk Management for the Mountaineer Area Council of Scouting America.
Brett White
// TALK: The Sovereign Stack: Defense in Depth for the Self-Hosted, AI-Powered Smart Home
// MISSION_OBJECTIVE (ABSTRACT)
Most cybersecurity guidance assumes you have a team. A SOC. A vendor. Someone else handling it. But a growing population of engineers, researchers, and practitioners are operating production-grade infrastructure entirely on their own — running dozens of containerized services, a fleet of IoT devices, and increasingly, local AI agents capable of acting autonomously on that infrastructure. What does serious security look like when the buck stops with you? This talk presents a practitioner’s end-to-end architecture for defense in depth across three interconnected layers: the homelab foundation, the smart home edge, and the AI intelligence layer — drawing from real-world implementation rather than theory. In the first act, we examine network segmentation philosophy for self-hosted stacks running Proxmox, Docker, and zero-trust overlay networks. We cover the most dangerous assumption homelabbers make — the flat network — fine-grained access control group design, secrets management, and why “Cloudflare tunnel plus Nginx Proxy Manager” is a deployment strategy, not a security posture. The second act turns to the IoT edge: the most chaotic, least-auditable layer of any home network. We examine what Zigbee, Z-Wave, and ESPHome actually provide in terms of authentication and encryption — and what they do not — the OTA firmware trust problem, and how a single compromised sensor can become a persistent foothold. A real smart infrastructure deployment covering power monitoring, environmental sensing, radar presence detection, and audio is used as a concrete attack surface case study. The final act addresses the emerging challenge that makes all of this more urgent: adding local AI agents that can act on your infrastructure. 
We explore why cloud AI is a non-starter for a privacy-first stack, how to architect a local-inference multi-agent system using open-weight models and scoped tool access, and the new threat vectors this introduces — prompt injection, MCP tool misuse, and agent privilege escalation. We close with a framework for treating AI as infrastructure, with all the security discipline that implies. Attendees will leave with a threat model, an architectural philosophy, and concrete implementation patterns applicable to any environment where a single operator is responsible for the full stack.
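The scoped tool access described above can be sketched in miniature. This is a hypothetical illustration, not the speaker's implementation: each agent gets an explicit allowlist of tools, so a prompt-injected request for anything outside that scope fails closed rather than escalating privileges.

```python
# Minimal sketch (hypothetical design): per-agent tool allowlisting for a
# local AI agent runtime. Tools outside the agent's declared scope can
# neither be registered nor invoked.
class ScopedToolRegistry:
    def __init__(self, allowed: set[str]):
        self._allowed = allowed   # the agent's declared scope
        self._tools = {}          # name -> callable

    def register(self, name, fn):
        if name not in self._allowed:
            raise PermissionError(f"tool {name!r} is outside this agent's scope")
        self._tools[name] = fn

    def call(self, name, *args, **kwargs):
        # Fail closed: anything not explicitly registered is denied,
        # regardless of what a (possibly injected) prompt requests.
        if name not in self._tools:
            raise PermissionError(f"tool {name!r} denied")
        return self._tools[name](*args, **kwargs)

# Example: a sensor-reader agent may read temperatures but never toggle relays.
registry = ScopedToolRegistry(allowed={"read_temperature"})
registry.register("read_temperature", lambda: 21.5)
print(registry.call("read_temperature"))  # 21.5
```

The design choice is the same one the talk applies to the whole stack: treat the agent as untrusted infrastructure and enforce least privilege at the boundary, rather than trusting the model to refuse misuse.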
// DEPLOYMENT_HISTORY (BIO)
Brett is a multi-business owner, computer science professor at the University of Charleston, and homelab practitioner with a decade of experience building self-hosted infrastructure at production scale. His work spans zero-trust network architecture, containerized service orchestration, IoT and home automation security, and local AI systems, with a consistent focus on privacy-first, open-source solutions over cloud dependency. He teaches technology coursework at the university level, runs a technology-focused podcast covering self-hosting, digital ownership, and privacy, and operates an extensive homelab running over thirty self-hosted services across Proxmox, unRAID, TrueNAS, and Docker environments. He has designed and deployed multi-agent local AI architectures using open-weight models, and leads multiple technology initiatives at the intersection of education, self-sovereignty, and emerging infrastructure.
// PREVIOUS_EXPERIENCE (NOTES)
Host of events such as the Raspberry Jam WV and Wild & Wired WV, and speaker for many events across the east coast and midwest about self-hosted applications, security, and open-source software to encourage ownership and digital sovereignty.
Vincent Smith, Trang "Moon" Bui, and Maria Albores
// TALK: Evaluating Information Leakage and Persistence in ChatGPT, Gemini, and Copilot
// MISSION_OBJECTIVE (ABSTRACT)
Large Language Models (LLMs) are used by millions of users worldwide to perform various tasks and have become a valuable tool for both individuals and businesses. Consistency and cybersecurity are two of the major concerns in evaluating LLMs and increasing the trustworthiness of their behavior. This study tests the security and consistency of the three most popular AI chatbots (i.e., ChatGPT-5, Google Gemini 2.5 Flash, and Microsoft Copilot). We hypothesized that output persistence would be high and consistent both within and across models, and that no model would leak information that users input to other users through prompts. Twenty fabricated business plans, half of which were defined as “confidential” strings, were developed and input into the three LLMs. Extraction of the information was attempted every twenty-four hours for two consecutive weeks. Results were analyzed using the Jaccard similarity index to compare the outputs between LLMs, identify differences over time, analyze response patterns, and identify any risks of information leakage. Over time, all models demonstrated a low level of output consistency, indicating poor behavioral dependability. The outputs of the three models were similar in content, but their expressions differed. Regarding security, it is unlikely that any confidential information leaked.
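The Jaccard similarity index used in the study's analysis is straightforward to compute. This is a minimal sketch with fabricated example outputs (not the study's actual data or code), treating each response as a set of words:

```python
# Jaccard similarity over word sets: |A ∩ B| / |A ∪ B|.
# A score near 1.0 means two LLM outputs share most of their vocabulary;
# a low score over time indicates low output consistency.
def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 1.0  # two empty outputs are identical by convention
    return len(sa & sb) / len(sa | sb)

# Hypothetical responses to the same prompt on consecutive days:
day1 = "The business plan targets regional coffee retail expansion"
day2 = "The plan targets coffee retail expansion in new regions"
print(round(jaccard(day1, day2), 2))  # 0.55
```

A word-set Jaccard score captures shared content while ignoring ordering, which matches the study's finding that outputs were similar in content even when the expressions differed.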
// DEPLOYMENT_HISTORY (BIO)
Vincent Smith, PhD is Program Director of Data Analytics and Computer Science and Assistant Professor of Data Analytics at the University of Charleston. He holds a PhD in Data Science, an MA in Psychology, an MA in Mathematics, and a Master's level certificate in behavioral statistics. His research focuses on applied AI, algorithmic bias, human–AI interaction, and real-world data analytics, with multiple peer-reviewed publications in the International Journal of Advanced Research and related journals. Dr. Smith is also the founder of Horse Creek Cabins, where he applies data science to customer experience. He is an i3 award-winning researcher and a frequent speaker on artificial intelligence and analytics across West Virginia.
// PREVIOUS_EXPERIENCE (NOTES)
• I3 Winner for Faculty/Student Research – 2025 • Selke, P., Smith, V. (2025). The Effectiveness of Translator Apps and AI. I3 Presentation. • Gohil, H., Smith, V. (2024). An Analysis of Algospeak and the Dreaded Algorithm. Int. J. of Adv. Res. DOI:10.21474/IJAR01/19684 • I3 Winner for Faculty/Student Research – 2024 • Hunkele, C., Smith, V. (2024). An Analysis of the Accuracy and Bias of a Generative AI Model. Int. J. of Adv. Res. DOI:10.21474/IJAR01/18640 • Garrett, M., Harwood, A., Shamblin, J., Smith, V. (2023). YouTube Transcripts Word Frequency Measure. J. Linguistics Culture and Communication. DOI:10.61320/jolcc.v1i2.91-99 • Hoffman, B., Smith, V. (2022). An Analysis of Utility Company Customer Service during the COVID-19 Pandemic. Int. J. of Adv. Res. 30 (Sept.) 10-09. DOI:10.21474/IJAR01/15394 • Smith, V. (2021). AEW and WWE’s Wednesday Night Wars: An Early Analysis. Professional Wrestling Studies Journal, vol. 2, no. 1, 47-60. • Smith, V. (2020). Differences in Adult and Child False Memories Based On Different Types Of Associated Words. Int. J. of Adv. Res. 8 (May). 09-15.