Welcome to Cyber Security Today. From Toronto this is the Week in Review edition for the week ending Friday, August 19th, 2022. I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com.
In a few minutes I’ll be joined by Terry Cutler, head of Cyology Labs in Montreal to talk about some of what’s been going on in cybersecurity. But first a look back at the headlines from the past seven days:
The Zero Day Initiative, which pays people around the world to find vulnerabilities in software applications, is tired of seeing bugs in security patches. Security patches are supposed to fix bugs, not introduce new ones. So it has decided that bugs resulting from faulty or incomplete patches will be publicly reported within 30 or 60 days of being discovered, instead of the usual 120 days. One of its officials told Security Week that as many as 20 per cent of all the vulnerabilities it buys come from bad patches. These can come from some of the biggest names in IT, including Microsoft, Adobe, Google, Oracle, VMware, Cisco, Apple, HP and Dell. Terry and I will discuss this.
Speaking of patches, Google this week issued a Chrome browser update that deals with one critical and six high severity vulnerabilities. In addition, Apple released emergency patches for iPhones, iPads and Macs.
Terry and I will also discuss whether companies are collecting too much data on their customers. This comes after an academic publication published research on the amount of data collected by medical-related companies and shared with Facebook for advertising and product lead generation.
And we’ll also look at a report that a North Korean group is pushing out a fake job description infected with malware for Macs.
Attacks that wipe organizations’ data completely are on the rise. That’s according to researchers at Fortinet. A review of cyber attacks from the first six months of the year shows at least seven new wiper variants are circulating. Many were identified hitting Ukraine before and after the Russian invasion in February. However, Fortinet says disk-wiping malware has also been used by threat actors this year against organizations in 24 other countries. Fortinet told one service that typically wiper-ware isn’t used by criminals. That suggests it’s a tactic used by foreign governments or activists.
Python application developers continue to be warned about malicious packages in the open source Python Package Index. The repository is more commonly known as PyPI. Researchers at Kaspersky found two malicious packages that could steal developers’ passwords. These two packages pretended to be a legitimate tool called ‘requests.’ The report is another reason for developers to carefully check and scan anything downloaded from open source repositories.
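To illustrate the kind of check developers can run before installing, here is a minimal sketch that flags requirement names suspiciously close to well-known PyPI packages. The package list and similarity cutoff are illustrative assumptions, not any standard tool; real scanners check far more signals.

```python
# Hypothetical typosquat check: flag names that are a near-miss of
# (but not identical to) well-known PyPI package names.
import difflib

# Illustrative subset of popular package names (assumed, not exhaustive).
POPULAR = {"requests", "numpy", "urllib3", "setuptools", "pandas"}

def suspicious(name: str, cutoff: float = 0.85) -> bool:
    """Return True if `name` looks like a typosquat of a popular package."""
    name = name.lower()
    if name in POPULAR:
        return False  # exact match: the real package
    # difflib returns popular names whose similarity ratio exceeds the cutoff
    close = difflib.get_close_matches(name, POPULAR, n=1, cutoff=cutoff)
    return bool(close)
```

For example, `suspicious("reqeusts")` returns `True`, while the genuine `requests` passes. This is only a first line of defence; it catches lookalike names, not malicious code inside a legitimately named package.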
And the Android app that subscribers to Amazon’s Ring home video security service can use recently had a serious vulnerability. That’s according to researchers at Checkmarx who found it. The hole could have allowed an attacker to get the names, email addresses, phone numbers and video recordings of customers. After being warned of the problem Amazon issued a fix in May. Amazon doesn’t believe the information of any customers was compromised.
(The following transcript of my conversation with Terry Cutler has been edited for clarity. To hear the full discussion play the podcast.)
Howard: Let’s start with the decision by the Zero Day Initiative to pressure companies to issue better security updates. For those who don’t know, the ZDI is a program run by security company Trend Micro to buy vulnerabilities discovered by anyone in application code. The idea is that it’s better a legitimate company buys the bugs than crooks. Trend Micro alerts companies that their software has a hole. The companies benefit by fixing the vulnerability and issuing a security patch. Trend Micro benefits by adding filters to its antivirus products that protect against the newly-discovered bugs. Trend Micro usually gives companies about 120 days to fix and quietly distribute a bug patch before it publicizes that there was a hole and that it’s been fixed. That period also gives IT departments and end users time to install the patch. Sounds great. However, Trend Micro is seeing a disturbing trend: an increasing number of vulnerabilities that researchers find are in previously-released software updates. In other words, because of sloppy work companies aren’t fixing everything in their fixes. So last week Trend Micro said the 120-day notice period is going to be shortened. Public notice will be released as soon as 30 days after discovery for critical bugs that result from previously issued faulty or incomplete patches. If companies don’t want to be embarrassed they’d better do better work. Is this a valid tactic?
Terry Cutler: Let me give you my perspective from the days I used to work for a software vendor. Some of your listeners might know I used to work for Novell for 10 years as a primary premium support engineer. Whenever we released software a lot of times it got rushed out to keep up with the competition. But when code is not being done properly it really upsets customers and they start pissing on the vendor. Excuse my language. I’ll give an example.
Novell had what’s called a single user interface, which means you couldn’t have multiple users logging into its console at the same time. But malicious hackers were able to find a way to bypass that functionality, which stunned the Novell developers. That was going to be revealed at a Black Hat conference, so we had about a day to try and reproduce this problem and find a fix or we’d look really bad. So they came out with a band-aid fix. But what can happen is that when patches get rushed out they break functionality at the customer site, and if you’re dealing with large environments that’s going to cause a lot of problems. For example, printer functions that were working before might not work anymore. Band-aid fixes could lead to a code rewrite, which means a large patch will be released at some point which could break other stuff. You never know what you never know.
Howard: Trend Micro says that the problem is bad patches mean IT departments are losing their ability to accurately estimate the risk to their systems because IT and security teams have to choose a priority for installing patches. Bad patches also cost organizations money and resources as patches get re-released and re-applied.
Terry: I experienced that about five years ago with a Microsoft patch that was rushed out. One of the issues that really burned me was an intermittent problem where the Exchange Server would stop all email services at exactly every 12 hours. We could actually time it to the second it was going to happen. It took weeks of troubleshooting to fix. We had to bring in other tech support guys from other IT vendors to try and help us. Later on we found out that it was a [bad] Microsoft patch. Microsoft released another patch days later which corrected the problem, but we had to go and find this issue for them.
Howard: This is a code quality problem. Organizations are supposed to have extensive processes to ensure that all software and security fixes are thoroughly scrutinized before being released. So what should companies do to ensure the quality of their code before release?
Terry: Developers aren’t coding with security in mind. What we’re seeing is that code is being built for convenience, and the developers are being rushed because of release dates. They have deadlines to reach and they’re not properly testing the code. One thing they can do is follow a methodology from OWASP [the Open Web Application Security Project], where they can test web applications against the most common vulnerabilities, such as malicious injection, cross-site scripting or SQL injection, before code is released. Unfortunately not all of them are going to do these tests before things are released …. There has to be more testing on applications.
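As a concrete example of the class of flaw OWASP-style testing is meant to catch, here is a minimal, self-contained SQLite sketch contrasting a string-built query with a parameterized one. The table and data are made up purely for illustration.

```python
# Sketch of a SQL injection flaw and its fix, using an in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload that turns the WHERE clause into a tautology.
payload = "x' OR '1'='1"
```

With this payload, `lookup_unsafe` leaks every row in the table, while `lookup_safe` correctly returns nothing. Pre-release tests that feed inputs like this to every entry point catch the bug before customers do.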
Howard: Right now ZDI is the group that’s putting pressure on software companies to smarten up with their code. Are there other ways that IT departments can let their vendors know that they’re unhappy with the quality of the security patches they’re getting?
Terry: Usually we don’t have access to the vendor source code, unless it’s an open source project and we can see what’s going on there. We’re at the mercy of the vendor to do a good job. It’s also very, very difficult for a vendor to release a patch that’s going to fit every single environment …
Howard: What about telling the vendor, ‘We’re tired of getting your buggy code. We’re switching products unless you smarten up.’
Terry: That’s a valid point. But here’s what goes on: Some of the code is written in India, other code is written in Russia … There are maybe three locations where the code is being written. And the code is passed on from group to group — because one group is working from 9 to 5, and when they go to bed Russia takes over — so they try to fix each other’s code all the time. And if you have a junior guy fixing up the code and releasing it, there might be problems.
Howard: Topic 2: Is your company’s brand at risk for collecting too much customer data? The academic journal Patterns looked at the advertising tactics of five cancer-related health companies to see how much data they collect from things like trackers in browsers or from signing up for a health app. Why? Because some of this data gets sent to Facebook, where it gets used for targeted ads and product lead generation. Lots of people with medical conditions turn to Facebook groups for information on their conditions. This makes Facebook a good place for companies selling medical products and services to push targeted ads. There are two problems: One is that all the browsing data collected by the companies studied, plus other information gathered from things like online surveys, poses risks if the data isn’t stored properly. There’s a risk that supposedly anonymous data can identify individuals, and then the stolen data could be used to push fake remedies to specific people. Another problem is that Facebook, or another social media site people visit, could push inappropriate ads to them. Doesn’t this report suggest that many companies are collecting too much data?
Terry: It’s been said that if the app is free, you are the product. If they’re giving away free stuff [like health information] in their app they’ve got to find a way to sell you services or upsell you something. The goal is to stay top of mind. But I agree that if a company has too much data it can target you with fake ads, or somebody else can take advantage of that information and send you fake ads. Companies have to be more transparent.
I’ll give you an example. Folks use our Fraudster app. We have access to things like their location, email address, first name and last name. I can go on to Facebook and launch an ad campaign telling Fraudster members, ‘Be the 10th person to buy the book and receive a gift.’ Companies need to be transparent about the data they collect and what it will be used for. Facebook is trying to fix that by creating a tab that explains why you are seeing an ad. But you may have gotten on someone’s list and you don’t know where an ad is coming from.
Howard: Consumers should realize you don’t have to give your real name when you’re signing up for an app or filling in an online survey. That’s one way you’re going to be protected.
Terry: But there are things called tracking pixels in browsers. A pixel gets embedded the moment your device visits a survey or a specific website. You don’t have to log in or even give away your email address or your first and last name. If your PC or your device has that pixel on it you’ll see that ad.
Howard: What have you seen in your years in the IT industry of companies collecting and storing too much data on consumers?
Terry: The common thing I see is that once a data breach occurs there’s a lot of sensitive information exposed, especially medical. That’s where a privacy commissioner is going to get involved. Usually fines or penalties will be handed out, especially in the U.S. They take that much more seriously than we do. I don’t know if the newer laws are going to clamp down, but I don’t see enough penalties here in Canada. Customers should be able to download their [personal] data so they’re able to see what the company has on them. Google is a perfect example. Even Facebook lets you download your data so you can see what you’ve done [on its platform] and what it knows about you.
Related content: Canada’s proposed new privacy law
Howard: One problem for firms is that publicly-released studies can damage the brand of the companies. This one named which companies were studied. Two of them allegedly didn’t even follow their own data privacy rules on what they collected. That’s got to be bad for their brands — and a lesson to other companies to be careful about what they collect, how much they collect and how they store it.
Terry: This is a perfect example of how legal and IT are out of sync. There are a lot of new functions or features and capabilities that IT makes available, and companies want to be the first to leverage them. But they don’t necessarily tell legal about them. Nor would legal even understand what IT did. There has to be better communication between these two departments.
Howard: It reminds me of a book written recently by Shoshana Zuboff in which she said companies are gathering and exchanging so much personal data that we’re in an age of surveillance capitalism. Do you agree?
Terry: There’s so much stuff being collected right now that it’s very, very difficult to hide on this planet. The problem is when information falls into the wrong hands and is used against you.
Related content: Balsillie calls for regulations to fight surveillance capitalism
Howard: Topic 3: Security firm ESET says the infamous [North Korean-based] Lazarus group is trying to hack people by publishing fake job offers from the cryptocurrency exchange Coinbase. Presumably the group is looking for people with cryptocurrency experience. Victims with Mac computers who click on the job description get infected. We’ve seen reports before about crooks and countries circulating fake job offers and fake resumes on LinkedIn. I recall a story about fake job offers in the aerospace industry where the goal was to infiltrate military contractors to get information about current and future products. So individuals have to be careful about online job offers they see on the internet or that get emailed to them.
Terry: Last year I saw a ton of these circulating. In fact, we had to do an investigation on one of them. A retailer was in the process of mass hiring because the pandemic was coming to an end. It needed to bring back more staff, so it launched a hiring campaign. But scammers created a fake hiring campaign impersonating the retailer. Next thing you know, people were applying to the scammers for jobs. The scammers would say, ‘We think you’re a perfect candidate for this, but you’re going to have to get a computer,’ and send them a fake quote from an IT reseller. The victim had to send money or bitcoin to buy the computer, but it was a scam. You also mentioned LinkedIn. I’ve gotten a lot of job offers from random people saying, ‘Terry, I think you’d be a perfect fit for our board of directors.’ Attached to the email was a zip file or a document, which was probably malicious. That’s where I think a lot of the Macs were getting hit, because when victims opened the file and enabled scripts it downloaded malware from the internet.
Howard: One message is individuals have to be careful about job offers that they see online or get sent to them. Companies have to watch job sites to detect and stop phony ads as well.
Terry: There are a couple of things companies can do. One is to set up Google Alerts. Go to www.google.com/alerts and type in keywords to search for, such as your company’s name. The moment a new site or article comes online containing those keywords you’ll receive an email. A company can also set up domain name monitoring to look for any lookalike domains that might be created.
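As a rough sketch of what such domain monitoring looks for, here is a small Python function that generates common typosquat variants of a domain. The transform list is a tiny illustrative subset; commercial monitoring services also check homoglyphs, hyphenation, TLD swaps and more, and then watch DNS registrations for matches.

```python
# Illustrative lookalike-domain generator for brand monitoring.
def lookalikes(domain: str) -> set[str]:
    """Generate simple typosquat candidates for a domain like 'example.com'."""
    name, _, tld = domain.partition(".")
    candidates = set()
    # Character omission: "exmple.com"
    for i in range(len(name)):
        candidates.add(name[:i] + name[i + 1:] + "." + tld)
    # Adjacent-character swap: "exapmle.com"
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        candidates.add(swapped + "." + tld)
    # Character doubling: "exxample.com"
    for i in range(len(name)):
        candidates.add(name[:i + 1] + name[i] + name[i + 1:] + "." + tld)
    candidates.discard(domain)  # the real domain is not a lookalike
    return candidates
```

Each candidate can then be checked against new domain registrations or resolved via DNS; a hit is an early warning that someone may be setting up an impersonation campaign.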