Security Exec Track: What CISOs May Miss About Mobile – Ed Amoroso’s Hard Truths and Strategic Advice

 

Session Description 

 Join this fireside chat for CISO-level insights on the complexities of mobile app risk – including the importance of mobile as an attack vector, AI, your organization's security ecosystem, and more.

Session Summary

  • Mobile app vulnerabilities, not devices or OS, are the biggest blind spot in cybersecurity today.

  • Quantifiable, risk-driven security practices are essential to prioritize mobile app risks effectively.

  • Compliance frameworks underrepresent mobile app risks, necessitating operational awareness and practical threat modeling.

  • Mobile app security challenges are compounded by geopolitical complexities and require a divide-and-conquer strategy.

  • AI is transformative but uncertain; cautious, informed adoption prevents overinvestment and misaligned priorities.

  • “Shift left” security—integrating security early in development—is critical to preventing mobile app threats.

  • Overconfidence in Apple and Google’s OS protections has lulled industry into underprioritizing mobile app security.

     

Session Transcript

 

 

Well, welcome, everyone. I'm thrilled to be joined by Dr. Ed Amoroso, former CISO of AT&T, CEO of TAG, and one of the most respected minds in cybersecurity. Ed's known for asking why; I think our relationship started with why. He asks, why do we do what we do in security? And he's always pushing the industry toward measurable, risk-driven practices. Today we're going to talk a little bit about risk and about best practices, and I'm really excited to have you join us, Ed.

It's nice to be here. I guess I should ask: why'd you ask me here? I'm just kidding.

Well, we figured we'd have a lively conversation. So, jumping right into it: you've long championed the idea of protecting critical infrastructure, and in the world of mobile, as mobile apps take center stage across a variety of industries (healthcare, finance, your former industry telco, government), where do you see the biggest blind spots in mobile risk today? And what are the systemic risks we're missing?

Well, we all called it wrong. Everybody thought in the beginning that what happened in mobile would be a little mini version of what happened with our PCs. We thought: all right, I have a PC, I have all these viruses on it, I've got to go buy McAfee and Symantec to keep the viruses off the OS. So we figured we'd have a little version of that, malware on the mobile OS. It didn't work out that way. It turns out mobile apps are the problem. Now, the irony is we had a problem with apps on our computers as well, but it's more pronounced in mobility. Just about everyone listening to me right now is probably living off their mobile apps, and if we asked them what mobile security issues they worry about, their mind immediately goes to the device, the OS. I think they're thinking about the wrong thing. If we were politicians, we'd probably say: it's the app, stupid.
But it happens a lot, Brian, in our industry: we're trained to see patterns, and we assume that what happened before is going to happen again. It's just a little different in mobility. Maybe because Google and Apple did a better job with the operating system code; that's certainly a good possibility. But when some app developer is banging out something for a bank or a social media company or a gaming company or whatever it is, they're not applying much better discipline than we've seen over the last 20 years, which in many cases was pretty bad. So the blind spot would be around the apps. Not totally, and it's getting better, but that would be my read: when we first started looking at this, we developed some bad habits because we thought it would just be the same, and it's not. It's very different.

Yeah. I think that leads into the next question. You've always been pushing this concept of quantifiable security. As a follow-on, in this mobile app ecosystem, how should we be measuring and prioritizing this risk against all the other priorities a CISO faces today?

It's so funny, Brian. You and I can ask that question like it's a simple question: how do you quantify and normalize risk? What could be simpler? Wow. A lot could be. It's a very, very complicated problem. First off, you want to be able to compare and rationalize your mobility risk with other aspects of your program. And I would say that, empirically, if we went back and looked at most of the attacks that have occurred in the past, they've been phishing and things like that, and any rational person would say, man, you've got to go fix that. So all the attention swung very quickly toward email security and endpoints, traditional endpoints. That's why CrowdStrike and others are such powerful companies.
But I think now we've made so much progress there that we've gotten to the point where, if you did an honest quantification, if you and I really sat down and went through the factors (you know, probability of attack, the consequence), man, your mobile apps are getting up there. For some businesses it's a gigantic problem; for others it's contextual. Again, your question about how you do this: it's going to be different in this industry versus that one, this size of company versus that, this region of the globe versus that. All of those are factors. But I would say the mobile app risk is definitely increasing. It may even be that it was always there and we're just noticing it more now. Certainly we use mobile apps more, so the consequence could increase. But if you asked, was the code great before and is it terrible now? It might be that a lot of mobile apps were terrible before and they're still terrible; we just started noticing. Depressing concept. But in terms of quantification, once you start damping down the risk in one area (and good for us, good for our community, we've done a pretty good job in a number of areas), I think mobility, and in particular mobile apps, has been a blind spot.

Well, I know you spend a lot of time talking to your network of CISOs, informing them and encouraging them to think differently. What risk questions do you think CISOs should be asking their mobile app teams as they think this through? Do you have any perspectives on that?

Yeah, you know, there are a million different ways that could be answered. Like you mentioned, I have this ongoing relationship with a bunch of different CISOs and their teams. That's the business we're in at TAG: we provide guidance to them, help them rationalize.
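The quantification Ed gestures at (score each program area by probability of attack times consequence, then compare mobile apps against the areas that already get attention) can be sketched in a few lines. This is a napkin-style illustration, not Ed's or TAG's actual method; the categories and numbers are made-up assumptions purely to show the mechanic.

```python
# Napkin-style risk comparison: score = likelihood x consequence, both on a
# 1-5 scale. Every category and number below is an illustrative assumption,
# not a figure from the conversation.

risks = {
    "email phishing":               {"likelihood": 4, "consequence": 3},
    "traditional endpoint malware": {"likelihood": 3, "consequence": 3},
    "mobile app data leakage":      {"likelihood": 4, "consequence": 4},
    "cloud misconfiguration":       {"likelihood": 3, "consequence": 5},
}

def score(r: dict) -> int:
    """Simple multiplicative risk score for one area."""
    return r["likelihood"] * r["consequence"]

# Rank areas by raw score. The point of the exercise is less the numbers
# than forcing the team to argue about the ordering.
for name in sorted(risks, key=lambda k: score(risks[k]), reverse=True):
    print(f"{name}: {score(risks[name])}")
```

The interesting output is when a quietly growing area such as mobile apps outranks the areas the budget already flows to; that is the "honest quantification" moment Ed describes.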
So first, let's start with one type of CISO: a more compliance-minded CISO. We see that a lot, where they're driven by frameworks. The problem, and I think you'd probably agree, is that the frameworks have under-attended to mobile app risk. So if you're guided by a framework that made some decision, you're basically hitching your wagon to that. NIST 800-53, for example, could probably be dialed up a little in terms of mobility risk in general and mobile app risk in particular. I've shared that with Ron Ross and others over the years; I said, I think you guys did a great job, but here's a place that's a little weak. So one answer, Brian, is that if CISOs are very compliance-minded, they might be led to believe this isn't as big a deal because it hasn't been pushed. That doesn't mean it's not a big deal; it just means it hasn't been pushed. That's problem one.

If you're more operationally minded, I think you're probably a little more aware of the problem. And, just between you and me, nobody's actually listening to us here, so I can tell you a secret; I guess maybe others will be in on it. I always use these risk platforms, but the way I preferred to do risk was to get my team together and write it on a napkin. It's proverbial. Meaning: all right, we've got these dashboards, they look great, I have all these charts that are going to go to the board. But now can we turn off all the computers and just ask, what are the few things that get us all fired? That's the real discussion. And I've got to tell you, for a lot of companies, a pretty significant hack involving their mobile apps probably makes that list. Not everybody, but a lot more than it used to be. And arguably everybody would see a pretty significant rise.
So I've been out telling everybody: hey, you're all looking for places to improve your risk posture, and this is a place we've probably under-attended and need to do a little more work. Now, the problem is it's this vast, complex ecosystem we've allowed to emerge, with different country risks. You guys, for example, found some problems with DeepSeek, as I recall. It was an excellent piece of research; I wrote about it, and I was very impressed. But look at all the complexities involved there. You've got geopolitics in there, and you have flat-out politics in there, right? You have data being shared back (I think it was ByteDance getting the data) and all this intrigue and complexity around one app, an important one doing AI. So, wow, there's a lot to this. I don't know if that's bad news. It's probably good news if you're in security; there's a lot of work for you. But for CISOs it's bad news in the sense that this ain't going to be easy. This is one we'll have to tackle by dividing and conquering. Some CISOs will start by figuring out how to fix the easy, less important apps. Others will dive right into the most important ones with the most critical data. I don't know which is right, whether you learn on the easy ones first. It depends. But complexity is a problem here. This is not going to be an easy fix.

Hey, that's why CISOs make the big bucks, right? Having to make the hard choices. And there are plenty of them. I think you point that out: where do you start, and how do you compare this risk to all the other risks they have to deal with? Those aren't going away either, and the complexity is not getting any easier.
But this is a space where you would hope people would spend a little more time. That's the coaching I provide, and I think it resonates. It's rare that I find myself in collisions with CISOs about these things. We're all pretty much in lockstep that there are a few areas that are under-attended to and a few that are over-attended to. One that's over-attended to right now is probably AI. It's coming, but when people say, "Oh my gosh, I have to go buy an AI security tool," we always ask: to deal with what risk? And then it's this future vision of what the risk could be. And I say, well, could it be something else? Well, yeah, it could be. So my coaching is usually: don't sign any three-year deal. Try something, sure, but remember that next year could look completely different. Sometimes you over-attend based on hype, and other times you under-attend simply because something hasn't been as prominent as it should be. So CISO is a tough job, and it's not for fainthearted individuals.

Yeah. Well, Ed, you mentioned compliance a moment ago, and you've also talked about the risks of over-disclosure, and you just mentioned AI. So I'm wondering: how do you see AI impacting compliance? Does it help simplify risk, or does it introduce a whole other layer of uncertainty? AI is a simple two letters, A and I, and boy, it's complex. There's the whole idea of generative AI, and we talked about DeepSeek, an AI mobile app we're dumping data into. So this is not really a well-formed question, but it's a big, chunky topic.

Oh, it's a good question. Look, AI is the most transformative technology I've seen in my whole career, and I'm an old dude, man; I've been doing this for 43 years. But the problem is we don't know how.
We know it's going to be big; we just can't pinpoint how. Is it primarily about reducing cost? About introducing new services? About replacing manual with automated? I could go on and on. If you're selling in this environment, you'd say yes to all of them. If you're trying to get funding, you'd better pick one or a few that make sense. And sadly, I've seen an awful lot of business cases driven primarily by "AI reduces cost." Now, it's funny. I work with some of these CISO groups, sometimes ones where people pay quite a bit of money to be part of the group, these elite groups, and I sometimes get free membership, so I'm sort of red meat at those places. When it's people paying a lot of money, they have a warped sense of things: ask them how they're doing on their AI initiatives and they're all funded, it's money, they're going to do it. But when you go to normal, everyday, living, breathing CISOs who can't afford to spend a bunch of money on a discussion group, whose budget is tight, they have to be a lot more specific. It's like: hey, I want to invest money in AI. The answer is going to be what we started this whole thing with: why? What is it we're doing? "Well, you know, we might lose data or IP through our gateway." And it's like, well, why aren't you using your proxy? "I guess I could." "Well, somebody might use a copilot to find data sitting in our enterprise somewhere." And you say, don't we have a data security initiative to fix that? "Yeah." Before you know it, you're really questioning: what is it I'm specifically trying to do? Or, "Gee, the model might hallucinate." And you go, well, Anthropic and OpenAI are worth more than planet Earth; shouldn't they be testing their models?
Why are we? It goes on and on. So it's not clear yet what the role is for the security team. But I will say a few things are absolutely certain. If you're a technology vendor, you'd be nuts not to embed AI, because it's awesome. I'm a computer scientist, I teach at NYU, this is what I do, and the idea of embedding models and intelligence is awesome. It really does turbocharge the way things work. So for technology builders and technology providers, you have to use it. If you're a practitioner, I would just say make sure you're doing something sensible. Sensible means that when things are changing quickly, you hold off. Like when it's blizzarding out: do you shovel in the middle of the blizzard, or do you wait for it to settle down so you get a sense of what you're dealing with? Am I getting six inches or two feet of snow? Maybe you do something in the middle. But you get the point. When things are changing so dramatically, we always say: participate, be engaged, by all means make your voice heard; you can influence things. But when things are changing this quickly, you really don't want to jump to the end of the story, because you don't know what the end of the story is. That's for practitioners. And again, for any tool I buy now or recommend, I want to know: what's the AI story? How is AI being used? How are you dealing with it? Because that's how we all learn, and that's how we build out a collective understanding of what I've been talking about. Where does the movie end? We all have to figure that out and define the end of the movie together. That's where we are right now, toward the middle of 2025.
But as we move from the middle to the end of this year and into next year, we're still in that kind of proof-of-concept, proof-of-value, what's-the-plan-here phase with AI in business and in cybersecurity. "Wow, I know exactly what we need to do here": anybody who tells you that is wrong. They don't know. They might want to know, but they could be wrong.

So when you think about this, when we get back to risk (obviously we have to see what the blizzard really turns into, how AI gets applied), from the CISO role, the security practitioner: any thoughts on quantifying the risk associated with AI? I'm thinking specifically of mobile and generative AI. There's the aspect that, hey, we're going to let AI help us build lots of code, we're going to deploy it in mobile apps, and we're going to be able to collect lots of data. Any thoughts on the risk?

So many. It's so wide open that it's hard to even say, here are the three things. It's more like, here are the 50 things. For example, when we measure the risk of AI doing a task, you can't measure it against perfect; you have to measure it against what we do now. Say you mentioned coding. I'm a computer scientist; I've been in and around software engineering my whole life. My dad was an early computer scientist, the chair of a software engineering department at a university. So when you ask, do we trust AI to write code?
I would say: compared to what? Compared to the developers at the best firm in the world, who are careful and write amazing code? Then I don't know. But compared to some of the mobile apps you guys look at, where (and I hope this doesn't land hard on anybody) some of the code is slopped together, let's face it: compared to that, I'll take the AI any day of the week, as would you. So it's always compared to what. There's that.

When you measure risk, like that napkin thing I mentioned, what I always do is ask: what's my risk right now, this minute? Then what's my near-term risk? And then what's my risk some number of months or years out? Now, near term, moderate term, long term, who knows; long term, maybe the robots take over. But for the immediate term, here's an example. Let's say you're a creative group that does brochures for a living. You've been in business a bunch of years, a little shop; people come in, you make brochures for birthdays, for businesses, for this and that. Then you go, oh my gosh, ChatGPT does this great. You used to have two interns, a writer, and you, the owner. So now you say, hey interns, I don't need you anymore, and the full-timer goes part-time. Now it's you and a part-timer. You've cut your payroll down and you just sit on ChatGPT all day long. Now, the risk is: what if ChatGPT were under a denial-of-service attack for two weeks? Now, it hasn't been, and I assume they have a CDN, cloud infrastructure, probably using AI itself. But say it happened. You're kind of out of business, because you've transformed your whole business. Today that risk isn't near-term. It's not like right now somebody said, oh my gosh, OpenAI just had a big board fight and they shut the whole place down.
Now, granted, you'd run to Anthropic or find some other tool. But the risk is either immediate (what am I using right now, what could happen, what security problems could arise?), and that's usually less interesting than the near-term thing. Meaning: hey, we are making this transition. Say you're a farmer. It's kind of amazing: farming, you and I would immediately think of somebody on a horse or a tractor with a big piece of grass in their mouth. I grew up in New Jersey, so that's my cartoonish image of a farmer. But nowadays farmers are about the highest-tech thing ever: using satellite imagery to survey the landscape and decide where to plant and irrigate, with autonomous equipment out doing the work, all controlled by computing. It's amazing. But suddenly their risk goes way up. If there are problems in the code, if there's some issue with the control center, if the AI runs amok, you could have a problem right now. So it's kind of interesting how that works.

For people listening, I would just say: there are these fancy risk platforms, and I like them all. There are a lot of good ones; I recommend them; you need them in a business setting. But make sure you do the napkin thing, where you set it all aside and say: look, I've got all this complexity and all these dashboards, but if I'm sitting at lunch with my CEO and the CEO says, "Hey, Ed, what are the two or three things we're worried about?" I'd better not say, "Well, let me go look it up on my dashboard," or "Here, let me pull out these PowerPoint charts." It better be that you know what they are. Like when I was in telecom, it was: the iPhones don't work anymore, the network's down, can't connect. That's problem one. Problem two: we lose all the data. Or somebody comes in and controls the networks. Real simple, basic things.
And then you work your way backwards from that to make sure you don't have gaps. Some vendors would refer to that as threat modeling, and I think threat modeling and proper risk analysis are very linked. But back to our original discussion around mobile apps: what I find is that when you do that threat modeling and think about what could be consequential to the business, increasingly mobile apps are finding their way into my analysis. Maybe 5 to 10 years ago I probably had somewhat of a blind spot too, because in those days I was still thinking that mobile threat defense might be the much more prominent issue compared to, say, mobile apps. And I think it's the reverse: mobile apps are the bigger issue.

Well, that leads into a question I wanted to ask as well. Imagine going back and becoming a CISO again today. Well, don't imagine too hard, because you love what you're doing right now. But when you think about the environment a CISO is in (the constraints, the resources, the fast development cycles of mobile), what mobile strategies would you prioritize first?

I'd start at the source. I mean the stuff you guys do, where you're helping the developers make sure they're preventing things in the first place. A big shift left would be my first priority: figure out all the opportunities you have to shift left. Sometimes you can't, right? You may not have the option, but you always start there. Everybody says you should have balance, and I say, ah, that's a lot of hooey. You shouldn't have balance; you should start by trying to avoid problems. You don't say, "I'm going to keep myself healthy by having a nice balance of eating well, exercising, and knowing the phone number of the ambulance in case something happens to me."
Well, if you ask my choice, I'd rather be healthy than just focus on making sure I know where the ambulance is. That should probably come second. People doing incident response might get mad at me for saying this, but boy, I'd much rather prevent things. And that's one thing I like about what you guys do at NowSecure: try to prevent this crap from getting in there in the first place. That is the best approach of all. So if I were back in the seat (and again, we coach a zillion of them, so I feel like I used to have one seat and now I have a hundred), with the folks we deal with and coach and support at TAG, I have to admit to a shift-left bias. I always felt that if you're going to have a bias, that struck me as the right one.

Sure. And if you're going to use your napkin-talk analogy (I love the napkin thing), if you were sitting around the table with some of your CISO friends and you were trying to walk through the risk-reward of the investment in mobile security, what would go on that napkin? What are the top couple of things they should really think about as they evaluate that?

I mean, dude: data leakage, number one. The apps are like a sieve, right? You guys have seen that. It's invisible to all of us. That's one thing about computer science, the concept of information hiding: an app runs and you're not really sure what's going on. You push a button and a thing pops up and you go, gee, I love my computer. I hit a button, I get the screen, I watch a video, some YouTube video or something, but you have no idea what's going on behind it. That's the power of computing.
So that's the data leakage going on in there: collecting data and sharing it with aggregators and sharing it with nation-states. Look, Brian, if I'm playing a solitaire game and somebody's collecting data, I'm probably kind of sort of okay with that. Whatever, you know: if you don't want to pay for the paid version, you know they're grabbing all kinds of crap. You are the product in those cases. But if it's my banking mobile app, I think I do have an issue then. Then I'm suddenly thinking of all these scenarios: wow, I hope they're protecting this and that. That's what goes through your mind. So there's probably a direct relationship between what you consider the material criticality or importance of a given app to your life, your mission, your business, your agency, whatever you're using it for, and how much time and attention you should spend worrying about it. Could somebody push a button on a kill switch and kill the mobile app? It's not that you yawn at that, but it wouldn't be as important as somebody taking over the app and grabbing your data and maybe using it to masquerade or something. That's scary stuff. And yeah, it directly correlates with the type of data and how important that app is to your life.

Yeah. Well, to wrap things up and maybe circle back to the beginning: we talked about the risks and a little about the way folks are looking at the problem, specifically around mobile. Why do you think it has maybe been deprioritized? Or maybe it's not intentionally deprioritized, but everything else has come before it, so in effect it's deprioritized. What would you say about that?

That's an easy one.
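The "criticality drives attention" relationship Ed describes (a banking app deserves far more scrutiny than a free solitaire game) can be sketched as a tiny triage function. This is an illustrative sketch only; the tier names, thresholds, and review activities are assumptions, not anything prescribed in the conversation.

```python
# Sketch: how much security attention a mobile app warrants, scaled by how
# sensitive its data is and how material the app is to the business.
# Tier labels and thresholds are illustrative assumptions.

def review_tier(data_sensitivity: int, business_criticality: int) -> str:
    """Both inputs on a 1-5 scale; returns a suggested review tier."""
    weight = data_sensitivity * business_criticality
    if weight >= 16:
        return "deep review: pen test, runtime protection, vendor audit"
    if weight >= 8:
        return "standard review: static/dynamic scan each release"
    return "light review: store-listing and permission check"

# A banking app (sensitive data, mission-critical) vs. a free solitaire game.
print(review_tier(5, 5))
print(review_tier(2, 1))
```

The design point mirrors the napkin exercise: rather than treating every app in the portfolio identically, a coarse weighting gets the scarce review effort onto the apps whose compromise "gets us all fired."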
It's two words: Apple, Google. It's not their fault; they've just done kind of a nice job, like the sandboxing, and the code has been solid, so we've been lulled into believing that those guys are taking care of security. And they're not. They're making sure that maybe a buffer overflow isn't going to happen on iOS, but they're not properly covering the apps. Again, you could argue whether it is their job or isn't; I say it's not their job. I think it's better to have a vendor ecosystem. Companies like NowSecure should be embedded the way we have Wiz, Palo Alto, Aqua, and Sysdig in the cloud ecosystem. We need that same sort of symbiotic ecosystem for mobile apps, between Apple and Google as the two primary app providers (certainly here in the US) and a vibrant ecosystem of vendor partners. That's what we need. But why did it happen? I think because we put a little too much confidence in two very great companies. Each worth, what, something bigger than a quadrillion? I don't even know what number comes after that; that's what they're worth. They've got a lot of money, so we assume they're powerful, they're doing all this. They do a lot, but they don't do everything. And that's the reason we've been lulled.

For another conversation: I know I have to continuously update my OS for features and security, so I kind of think there's got to be something there. But anyway, just to wrap this up, Ed: for anybody who's listening, not right now but maybe later, how can you and TAG be helpful to them? Can you give a quick rundown?

We manage vendors. For enterprise teams, we have a cyber vendor management platform: you upload your spreadsheet and we run a whole series of analytics on your vendors and give you views you don't have right now as you try to rationalize.
And for vendor partners, we try to help them tell their story. I have a new book coming out through Columbia University Press called Reaching the Chasm. It's the prequel to Geoffrey Moore's Crossing the Chasm, and it's all about helping startups get to scale. So, yeah, follow me on LinkedIn, definitely follow. I have a new podcast on YouTube called TAG Talk, and I hope people will subscribe to us on YouTube to help us with the YouTube algorithm.

Excellent. Well, Ed, thank you so much for your time, for your insights, and most importantly your why, and everything you're doing out in the market to make the world a better place. We really appreciate you taking the time with us today. Have a great one. We'll talk to you soon.

Thanks, Brian. Bye.

 
