The critical role of security and compliance in managing applications and data environments is undeniable. Deploying a GenAI platform differs significantly from a standard Infrastructure as a Service (IaaS) setup, particularly in terms of data access rights and control. According to research by TechTarget’s Enterprise Strategy Group, even companies that initially favored cloud solutions are opting to keep some workloads on-premises. This decision is driven by concerns about data governance and sovereignty (42% of respondents) and security issues (34%), reflecting a deep awareness of the risks of data exposure with GenAI technologies.
Determining who has access to corporate data for specific purposes should be a top priority. Data secured behind a firewall is generally well-protected, enhancing the benefits of a private cloud system. However, consumer-level AI like ChatGPT, which operates in a fully public domain, can jeopardize even these safeguarded setups. ChatGPT is considered “shadow AI” because it operates outside the oversight of the IT department. Each instance of an employee using ChatGPT without permission can become a security risk.
Consumer-grade, web-based AI like ChatGPT stores all input data for learning purposes, which can include sensitive or proprietary information. The Samsung incident is a prime example: Samsung engineers mistakenly used ChatGPT to review their confidential code and other private data, which is now irretrievably online. This case underscores the necessity of safeguards to prevent the careless use of online AI tools.
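One such safeguard is to scan outbound prompts for sensitive patterns before anything reaches a public AI service. The sketch below is a minimal, illustrative example of that idea; the patterns, placeholders, and function name are all hypothetical, and a real deployment would tune them to its own data.

```python
import re

# Illustrative patterns for data that should never leave the network
# (hypothetical examples only; real filters would be far more thorough).
SENSITIVE_PATTERNS = [
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[API_KEY]"),
]

def redact_prompt(text: str) -> str:
    """Replace sensitive substrings before a prompt is sent to a public AI tool."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact_prompt("Contact alice@corp.example, api_key=abc123"))
```

A gateway like this would not have prevented every mistake in the Samsung case, but it illustrates the kind of automated guardrail that can sit between employees and shadow AI tools.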
Imagine you’re hosting a private party at your home. You get to decide who walks through your door, right? That’s kind of like keeping your data on your own servers – you’re in full control. But now, think about throwing that party at a fancy hotel managed by a big company. Sure, it’s still somewhat exclusive, but you’re relying on their security to keep unwanted guests out. That’s what enterprise-cloud software by big tech companies is like. It’s a middle ground; it’s secure, but you don’t hold all the keys. The real takeaway? If you want the ultimate say on who sees your data, keep both the data and the control of the service firmly in your own hands.
When we talk about building IT systems, security should always be our top priority. That’s why it’s crucial to choose partners who truly understand how important it is to keep our applications and data safe. Dell Technologies has made some smart moves with its PowerEdge servers, bringing in cutting-edge security features that are spot-on for today’s needs.
The Dell Technologies approach to security is intrinsic — built in, not bolted on, and integrated throughout each PowerEdge server’s lifecycle, from design to manufacturing, use, and end of life. This approach of “bringing AI to your data” for the long term is what sets Dell apart. Its zero-trust architecture presumes the network is always vulnerable to compromise, so it safeguards access to critical data and resources by:
1. Assuming every user and device represents a potential threat.
2. Applying the principle of least privilege to restrict users and their devices.
3. Applying multifactor authentication models and authorization rights that are time-based, scope-based and role-based.
4. Deploying models on premises and leveraging retrieval-augmented generation.
5. Leveraging a secure supply chain.
6. Authenticating and authorizing each communication with the infrastructure.
7. Avoiding inherent trust in any entity; verification is required to access all assets.
Dell PowerEdge servers are also designed so that unauthorized BIOS and firmware code can’t run. Basically, if the server can’t verify that the BIOS is legitimate, it shuts down and adds a notification in its log so that IT can initiate a BIOS recovery process.
All new PowerEdge servers use an immutable, silicon-based root of trust to attest to the integrity of the code running. If the root of trust is validated successfully, the rest of the BIOS modules are validated by using a chain of trust procedure until control is handed off to the OS or hypervisor.
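The chain-of-trust idea can be illustrated with a simplified hash-based sketch: each stage attests to the expected digest of the next, and boot halts at the first mismatch. This is a conceptual model of the technique only, not Dell’s actual verification scheme, and all stage names and data here are made up:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest standing in for a firmware integrity measurement."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical boot stages. Each entry: (name, code, expected digest of the next stage).
bios_module = b"bios module code"
bootloader = b"bootloader code"

# The silicon root of trust holds the expected digest of the first BIOS module.
root_of_trust = digest(bios_module)

def verify_chain(stages, expected_first_digest):
    """Validate each stage against the digest attested by the previous one."""
    expected = expected_first_digest
    for name, code, next_digest in stages:
        if digest(code) != expected:
            # In the real server, this is where boot halts and the event is logged.
            return f"halt: {name} failed integrity check"
        expected = next_digest
    return "boot: control handed to OS"

chain = [
    ("BIOS", bios_module, digest(bootloader)),
    ("bootloader", bootloader, None),
]
print(verify_chain(chain, root_of_trust))  # → boot: control handed to OS
```

Because the first digest is anchored in immutable silicon, tampering with any stage breaks the chain: replacing the BIOS bytes with anything else makes the very first check fail, which is the behavior the article describes when unauthorized firmware can’t run.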
In conclusion, in the era of generative AI (ChatGPT or any other), safeguarding organizational IT environments is not just beneficial; it’s essential.
Nice work, George! Your comparison between cloud-based and on-premises data control is quite interesting, highlighting the necessity for companies to maintain control over critical data. Your thorough analysis of Dell’s PowerEdge servers clearly demonstrates how incorporating inherent security features can provide a solid basis for protecting IT infrastructure in the age of artificial intelligence.
Nice post, George! Your post offers real insight into how organizations can address data security concerns around GenAI: leakage through public AI models, shadow AI tools that bypass IT oversight, and data sovereignty through on-premises solutions that ensure robust control and security. How can companies adopt complementary approaches, such as the intrinsic security, zero-trust architecture, and root-of-trust measures that Dell Technologies has put in place, to protect sensitive information and keep system integrity intact?
Excellent observations, George!
You’ve skillfully outlined the distinctions between IaaS and generative AI systems. The idea of “shadow AI” is essential, particularly in light of recent events like Samsung’s that highlight the dangers of unmonitored AI use. Dell sets the bar high with its proactive security strategy for PowerEdge servers, especially the zero-trust design. Security must be given top priority in the age of generative artificial intelligence! I appreciate you sharing!
Great insights, George! The article does call out the growing need for responsible AI usage. You have highlighted the example of Dell’s proactive approach and how enterprises are considering it for safeguarding their data. But I do think some level of responsibility must rest on the AI platforms themselves. Striking a good balance between competitive advantage and ethical responsibility is crucial, especially if we want to see more AI adoption in highly regulated sectors like healthcare. If AI platforms can demonstrate robust security, compliance, and data governance measures, industries such as healthcare will be more willing to embrace these technologies to enhance patient care.
You’ve highlighted the importance of on-premises data security over cloud-based solutions when it comes to GenAI technologies. However, is this feasible given current technological advancements and conditions? The Samsung incident has shown us the risks of mishandling sensitive data, emphasizing the need for greater awareness among developers. Dell PowerEdge servers provide effective solutions with security features like zero-trust architecture and immutable BIOS validation.
Nice post George. The importance of maintaining data governance and security controls cannot be overstated, especially in the age of generative AI. Dell’s commitment to zero-trust architecture and intrinsic security measures truly sets a standard for the industry. As AI continues to evolve, prioritizing data protection while enabling innovation will be key to sustainable growth.
It’s great to learn about Dell PowerEdge Servers. The fact that unverified BIOS and firmware code can’t run, thus preventing rootkits, is an often underemphasized feature. However, I’m a bit skeptical about how it verifies the root of trust for the BIOS. Does it have immutable root certificates that ensure security?