While there could be many definitions of what a connected vehicle is, the following is how Wikipedia defines a “connected car”:
“A connected car is a car that is equipped with Internet access, and usually also with a wireless local area network. This allows the car to share internet access, and hence data, with other devices both inside as well as outside the vehicle.” – Wikipedia
This post is part of a multi-part blog series on the security of connected vehicles. The focus of this post is describing the Society of Automotive Engineers (SAE) definitions of the different levels of autonomy and exploring some of the stakeholders in the connected vehicle economy.
Are Connected Vehicles the Same as Autonomous Vehicles?
Connectivity and autonomy are two different concepts. Most modern vehicles have some type of connectivity but not necessarily any level of autonomy. In many cases it is a matter of terminology and how people define a connected vehicle; some may call it an intelligent vehicle as well. However, please note that while there is a lot of buzz about “autonomous vehicles” or “self-driving cars”, a connected vehicle may be neither “autonomous” nor “self-driving”. The Society of Automotive Engineers (SAE) defined six levels of driving automation, Level 0 through Level 5, as shown below (SAE J3016), and connected vehicles may fall into any one of these.
Level 0 (No Automation) – The human driver is fully responsible for driving the vehicle.
Level 1 (Driver Assistance) – Computers assist with steering, acceleration, or deceleration.
Level 2 (Partial Automation) – Computers have primary responsibility for steering, acceleration, and deceleration while the human driver acts as a backup.
Level 3 (Conditional Automation) – The vehicle drives itself, but the human driver will respond to requests for intervention from the vehicle.
Level 4 (High Automation) – The vehicle can drive itself even if the human does not respond to a request for intervention.
Level 5 (Full Automation) – A fully autonomous vehicle that can do everything a human driver can do, under all conditions.
Who are the Stakeholders?
There is a perception that vehicle manufacturers are the primary stakeholders in connected vehicles. While they are key stakeholders, there are many other parties who are involved directly or indirectly in the overall connected vehicle ecosystem.
Owners and Drivers – The owners and drivers of connected vehicles are directly involved, as they have to understand how these vehicles work and how to best utilize their capabilities. There is a good likelihood that you are already driving a car with some connectivity, intelligence, and autonomy; there are more computers in modern cars than we usually realize. Even if you are not driving a connected car, people around you on the road may be, or you may rent a connected car while traveling and need to know how it works.
Transportation and Delivery – Many vendors already have vehicles/trucks on the road that are connected through telematics systems to measure the efficiency of their transportation and delivery operations. A number of truck vendors are working on fully autonomous delivery trucks, with implications for jobs as well as improvements in productivity.
Taxi Business – The taxi business has been using connected vehicles for quite some time, but the new push is towards robotaxis, where autonomous taxi service initiatives are under way. Well-known companies like Uber, Lyft, GM, and others are working furiously towards this goal, and there are many smaller, lesser-known startups in the race as well.
Critical Infrastructure Protection – People responsible for protecting critical infrastructure must work on initiatives for dealing with autonomous and semi-autonomous vehicles.
Food Delivery – The food delivery business is growing, and there are a number of companies in the race. If you work in any food business, you should be thinking about how to utilize connected vehicles to improve your business, order taking, delivery scheduling, and so on.
Smart Cities and Local Governments – Local governments are fully involved in providing the core infrastructure on which connected vehicles can better operate. This includes, but is not limited to, initiatives like intelligent street signs, traffic lights, and road sensors that provide data about road conditions, traffic congestion, and so on. A compromise of the integrity of this data can wreak havoc. Information security professionals for city governments and smart city projects should actively work on creating a reliable and secure infrastructure so that connected vehicles can operate safely.
Internet Service Providers (ISPs) – The ISPs are responsible for providing reliable connectivity to vehicles inside and outside of cities. The amount of data consumed by connected vehicles is increasing, especially as vehicles become fully connected.
Gas Stations and Convenience Stores – Modern autonomous vehicles may show up at a gas station or convenience store without a driver to get gas or pick up groceries. Companies in these businesses should be prepared to provide secure and reliable services to these vehicles. At that point, you have to treat vehicles like people as far as services are concerned.
Insurance and Underwriting – There are new risk factors (or, in some cases, less risk) arising from the connectivity and autonomy of vehicles. Insurance companies need to factor the type of connectivity and autonomy into their risk models to come up with reasonable insurance costs. Hacking is a real threat for connected vehicles, with strong implications for the insurance industry.
Information Security – As alluded to in some of the areas above, information security professionals have added responsibilities when it comes to this emerging field. First, they need to better understand connected vehicles, their use cases, threat vectors, and overall risk scenarios. Second, they need to add vehicle security to their policies and procedures. Third, vehicle security, where it makes sense, should become part of security operations centers (SOCs).
Federal Government – Federal government agencies must ensure that as autonomous vehicles come onto the roads, their software features and the redundancy of their control systems follow high standards for the safety of other people on the road. A malfunctioning autonomous or semi-autonomous vehicle can cause significant damage.
The above list is just a sample of the stakeholders in the connected vehicle ecosystem. This technology is changing modern life and has the potential to change it even more in the coming years as we move to higher levels of autonomy. It is incumbent on information security professionals to better understand existing and upcoming technologies to be effective in their roles and to better enable their businesses.
Budget estimates are a major part of building a SOC business case. A typical budget consists of the following three major components:
Capital Cost – This consists of the initial expense of building the SOC and includes everything from furniture to hardware, software, and external consulting fees.
Annual Payroll Cost – This includes salary and benefits for the people running the SOC. Depending upon location and the size and scope of the SOC, this can vary significantly. However, it is a major part of annual cost.
Annual Recurring Costs – These include annual licensing fees, equipment depreciation, skills training, threat intelligence feeds, and general IT costs.
While estimating these costs, think about major cost buckets and get estimates from multiple vendors. For example, you may want to get quotes from multiple SIEM vendors by providing them high-level requirements. Similarly, you can estimate the number of IP addresses for subscriptions to network vulnerability scanning and application vulnerability assessment services.
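As a rough sketch, the three cost components above can be combined into a simple total-cost-of-ownership model. All figures below are hypothetical placeholders for illustration, not vendor quotes:

```python
def soc_budget(capital, annual_payroll, annual_recurring, years=3):
    """Total cost of ownership: one-time capital plus recurring annual costs."""
    return capital + years * (annual_payroll + annual_recurring)

# Hypothetical example: $500k build-out, $1.2M payroll, $300k recurring costs
total = soc_budget(500_000, 1_200_000, 300_000, years=3)
print(f"3-year TCO: ${total:,}")  # 3-year TCO: $5,000,000
```

A model like this makes it easy to compare vendor quotes or SaaS-versus-perpetual licensing scenarios by swapping in different numbers.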
Estimating Number of People
The estimate for the number of people may vary significantly depending upon whether you want to run a 24x7x365 SOC or something less than that. Following is one way of estimating the number of people for a 24x7x365 SOC.
Consider three shifts of 8 hours each, with 3 analysts in the first shift and 2 analysts in each of the other two shifts. This makes 7 analysts per day at 8 hours each, for a total of 56 hours every day. For the whole year (365 days), this requires 20,440 hours; let us round it to an even 20,000. Typically, one person will work at most 2,000 hours on an annual basis, which means you need 10 analysts to run the SOC. You can divide these analysts into Tier 1, Tier 2, and Tier 3. In my example, I estimate 5 Tier 1 analysts, 3 Tier 2 analysts, and 2 Tier 3 analysts.
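The staffing arithmetic above can be sketched in a few lines of Python. The shift sizes and annual hours are the same assumptions used in the text, so you can adjust them for your own shift model:

```python
SHIFT_HOURS = 8
ANALYSTS_PER_SHIFT = [3, 2, 2]     # first, second, and third shift
HOURS_PER_ANALYST_YEAR = 2000      # at most, per the estimate above

daily_hours = sum(ANALYSTS_PER_SHIFT) * SHIFT_HOURS     # 7 analysts * 8 h = 56
annual_hours = daily_hours * 365                        # 20,440 hours per year
analysts_needed = round(annual_hours / HOURS_PER_ANALYST_YEAR)

print(daily_hours, annual_hours, analysts_needed)       # 56 20440 10
```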
In addition to analysts, you will also need specialists like forensics and malware experts and a SOC manager.
If the SOC is not 24×7, your estimates will change accordingly. Based upon the number of shifts, you have to create a schedule for these analysts and plan for vacation, training, and other situations. Typically, the SOC manager will perform these duties.
We will have a separate blog post about the roles and responsibilities of each person and about scheduling.
Estimating Technology Cost
As far as technology cost is concerned, you can explore options for Software as a Service (SaaS), purchasing perpetual licenses, or licenses with an annual cost; vendors provide a number of options. You should budget about 20% of the software cost as an annual maintenance fee, but vendors can give you exact numbers.
For the initial SOC implementation, you will need external professional services. Vendors with expertise in building and running SOCs can provide initial installation and tuning help to get the SOC up and running.
Build or Outsource?
For comparison, you should also consider the option of outsourcing the SOC. There are many vendors who provide “SOC as a Service” and bring their expertise to your benefit. Some vendors can co-manage the SOC with your team, reducing the overall cost. You should explore all options, as a SOC is a major undertaking and needs significant planning.
Logs provide a wealth of information, and that is one of the reasons almost all security standards and frameworks (NIST, ISO, PCI, and others) emphasize the collection, storage, and analysis of log data as a key aspect of any security program. Collecting and managing logs is a fundamental requirement of any SOC implementation and is needed to meet many compliance requirements.
However, as we know, some log sources provide much more value to security programs than others. So while you can collect, store, and process all the data you want, thinking about the true value of each source can help you create a more cost-effective and focused strategy.
A phased approach to log management is always prudent: start with the important, more valuable log sources first and then add additional log data as your program matures.
While traditional log collection using Syslog protocols and log files has worked for quite some time, newer technologies are bringing challenges to log collection using older methods. With the fast transition to Cloud-based technologies, newer log data may come from SaaS applications, Cloud application platforms, serverless applications, IoT devices, operational technologies, connected vehicles, drones, smart city technologies, and many others. These new log sources don’t always send logs via Syslog and may utilize APIs, web services, or Cloud services specially built for logging. While planning for collecting log data and building a log collection platform, all of these new options must be considered.
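To illustrate the point about diverse collection mechanisms, a collection layer can normalize events arriving over different transports into one common shape before forwarding them. This is only a minimal sketch; the field names, hostnames, and sample payloads are hypothetical:

```python
import json
from datetime import datetime, timezone

def from_syslog(line):
    """Parse a minimal BSD-syslog-style line into a common event dict."""
    # Example line: "Jan  5 10:22:01 fw01 kernel: DROP src=10.0.0.5"
    parts = line.split(None, 4)
    return {"source": parts[3],
            "message": parts[4],
            "received_at": datetime.now(timezone.utc).isoformat()}

def from_cloud_api(payload):
    """Normalize a JSON payload from a hypothetical cloud logging API."""
    event = json.loads(payload)
    return {"source": event.get("resource", "unknown"),
            "message": event.get("message", ""),
            "received_at": event.get("timestamp")}

# Both transports end up in the same common event format
e1 = from_syslog("Jan  5 10:22:01 fw01 kernel: DROP src=10.0.0.5")
e2 = from_cloud_api('{"resource": "app01", "message": "login failed", '
                    '"timestamp": "2019-01-05T10:22:03+00:00"}')
print(e1["source"], "|", e2["source"])  # fw01 | app01
```

Real deployments would add many more parsers (MQTT, binary formats, XML) behind the same common schema, which is what keeps downstream correlation and search consistent.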
Distributed Log Collection
A distributed log collection architecture, where local log collectors receive logs from different log sources and then forward them to one or more central locations, is commonly used today. This architecture helps provide resiliency and reduces data loss if the communication link to the central log collection point becomes unavailable. The following diagram shows one such arrangement.
Welcome to the brave new world of log collection, which uses many methods to collect logs from Cloud, IoT, vehicles, drones, operational technologies, and more. Standing up a Syslog server is no longer sufficient.
A more distributed architecture can both collect and index log data locally and then make the indexes available to search requests from SOC analysts. This may be necessary to meet certain privacy needs, like GDPR. However, one needs to weigh the flexibility and scalability of a distributed log collection infrastructure against the cost of managing it. As an example, indexing logs close to the edge is attractive, but it can create additional overhead in terms of correlation, reporting, and alerting, as well as the cost of managing indexes at multiple locations. Needless to say, like everything else in life, there are some compromises to be made here as well!
Logging and NTP Protocols
A timestamp is an essential part of each log event. An important factor in building logging infrastructure is ensuring time synchronization among all log sources to keep logs in proper order. Network Time Protocol (NTP) is commonly used for this purpose. While NTP is a topic in itself, it is sufficient at this point to understand that no logging infrastructure is complete until NTP is implemented to support it. Without it, log correlation and analytics will not work properly.
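NTP itself is a network service, but a small sketch can show why consistent, comparable timestamps matter: events reported in different time zones (or by skewed clocks) sort incorrectly unless normalized to a common reference such as UTC. The hostnames and timestamps below are hypothetical:

```python
from datetime import datetime, timezone

# Two events reported with different time-zone offsets
events = [
    {"host": "web01", "ts": "2019-01-05T10:00:03+00:00", "msg": "login ok"},
    {"host": "db01",  "ts": "2019-01-05T05:00:01-05:00", "msg": "query start"},
]

def utc_key(event):
    """Normalize each timestamp to UTC so cross-zone events sort correctly."""
    return datetime.fromisoformat(event["ts"]).astimezone(timezone.utc)

for e in sorted(events, key=utc_key):
    print(e["host"], e["msg"])
# db01 (10:00:01 UTC) correctly sorts before web01 (10:00:03 UTC),
# even though its local wall-clock time looks earlier by five hours
```

Normalization like this only works if every source clock is accurate in the first place, which is exactly what NTP provides.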
Lastly, building logging standards that identify the type, amount, and level of logging also goes a long way toward building a consistent approach throughout an organization. A logging standard must address requirements for logging at different levels, including system, middleware, and applications. Logging standards should also specify accepted logging protocols, storage, and the lifecycle of log data, and must be updated at least annually to ensure new sources and types of important logs are taken into consideration based upon their value.
While building a scalable and distributed logging infrastructure, one should consider the following:
Use of local log collectors, which can help with reliability, buffering, compression, and bandwidth savings
Support for diverse log collection mechanisms, including Syslog, APIs, IoT protocols like MQTT, plain text files, XML, binary logs, and others
Prioritization of log sources based upon their contribution to better risk management and threat detection or response
Use of NTP in conjunction with the overall logging infrastructure to ensure proper ordering and correlation of logs
Logging standards to bring consistency and clarity to logging requirements
By taking these factors into account, there is a much better chance that you will build a logging infrastructure that grows with your needs, reduces cost, is more efficient and resilient, and brings more value to managing risk.
Continuous learning and skills development are essential for any cybersecurity professional, but most don’t get enough time during the normal work week. So why not turn Saturdays into collaborative learning events where people come to share knowledge, teach, and learn about select topics related to cybersecurity? My new initiative is launching “Cybersecurity Learning Saturday”, which is summarized in the following few points:
Make Saturday a learning and skills development event that also helps you earn CPE credits to meet requirements for various certifications
Pick specific topics for day-long training sessions that will run in parallel
Bring in volunteer trainers with expertise in these areas who have a passion for sharing their knowledge
Follow a specific standard training template for each session for consistency
Open the event to the general community, where each learner picks one of the topics, registers for the session, and gets a certificate of attendance at the end for CPE credits
With these objectives in mind, Cybersecurity Learning Saturday will become a learning event where professionals can pick a topic of interest and join a day-long training session to upgrade their knowledge and skills. The proposed topics include, but are not limited to, security certifications, Cloud security, security of DevOps, SOC, different types of security assessments (including network and application security), and secure coding for web application developers.
The first Cybersecurity Learning Saturday will be held on March 2nd, 2019 in Columbus, OH. I hope to see you at this event! Registration will start soon.
P.S. If you have a passion for sharing your expertise and would like to be a trainer for one of these sessions, don’t hesitate to contact me!
While doing research for my upcoming book about running a successful Security Operations Center (SOC), I have interviewed people who have built and run SOCs and reviewed survey reports from organizations like SANS and others. Overall, it is a sorry state of affairs: almost half of the organizations have no metrics for measuring the success of their SOC implementations. Those who have metrics are mostly using non-business-focused measurements to gauge the performance of the SOC, and some are using metrics just to justify a particular technology investment.
Most people are not focusing on automation and are still doing manual work.
There is a lot of work to be done to make SOCs efficient and to define real metrics that demonstrate business value!
While defining the SOC mission and goals are key starting points, defining SOC scope is crucial to manage the overall SOC project and to break a large multi-year project into smaller phases and milestones. This also helps manage cost and simplify implementation. My suggestion is to divide a SOC project into multiple phases, each about six months long. Following are some key areas to consider when defining the scope of each phase.
Log sources vary widely, from security device logs to network components, applications, devices, and many others. Collecting logs also requires significant investment in log storage and processing infrastructure. You want to prioritize log sources that bring the most value from a security monitoring perspective. For these reasons, you should start with a small subset of logs and expand the scope of log collection over time (in future phases of the project). While selecting the initial log sources, you can consider the following:
Value of logs for identifying security events (proactive)
How a particular log source can help in incident investigations (reactive)
Amount of log data that you can handle
Compliance needs and requirements
Typically, you should start with logs from security devices (firewalls, IDS, content filtering and proxy servers, identity management systems, etc.). The second preference may be operating system and public-facing web server logs. Then you can move to applications, and so on. There is no prescribed order; define your own scope based upon your particular situation and which systems play a key role inside your organization.
You can also use threat modeling techniques to identify critical log sources and prioritize them accordingly.
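One simple way to make this prioritization concrete is a weighted-scoring sheet over the criteria listed above (proactive detection value, investigative value, compliance needs, and data volume you can handle). The weights, source names, and 1-to-5 ratings below are hypothetical examples, not a prescription:

```python
# Hypothetical weights: detection value matters most; high volume costs points
WEIGHTS = {"detection": 3, "investigation": 2, "compliance": 2, "volume": -1}

# Hypothetical 1-5 ratings for a few candidate log sources
sources = {
    "firewall":   {"detection": 5, "investigation": 4, "compliance": 5, "volume": 3},
    "web_server": {"detection": 4, "investigation": 5, "compliance": 3, "volume": 4},
    "printers":   {"detection": 1, "investigation": 1, "compliance": 1, "volume": 2},
}

def score(ratings):
    """Weighted sum across the prioritization criteria."""
    return sum(WEIGHTS[criterion] * value for criterion, value in ratings.items())

ranked = sorted(sources, key=lambda name: score(sources[name]), reverse=True)
print(ranked)  # ['firewall', 'web_server', 'printers']
```

A sheet like this makes phase planning defensible: the top-ranked sources go into phase one, and the ratings can be revisited as the program matures.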
Time of Day
Although we want a 24×7 SOC, that is not always possible due to various constraints. An 8×5 (8 AM to 5 PM) or single-shift SOC may be a great starting point for many organizations, at least in the initial phases of SOC implementation. Once the initial phase is complete, you may want to add a second shift before going to a full 24×7 implementation. Global organizations may also start with a single SOC and then use a follow-the-sun model to achieve 24×7 coverage.
Large organizations have multiple business units, and not all of these units need to be under SOC scope, at least not in the first phase. While each organization may have different criteria for identifying which business units to cover, some considerations may include:
Criticality of a business unit for the organization
Type of data
Compliance needs and local rules/regulations
Selection of business units may also follow a phased approach.
Multinational organizations may decide SOC scope based upon specific geographic locations, among other criteria.
The fast emergence of new technologies, including the Internet of Things (IoT), blockchains, autonomous vehicles, and drones, is also impacting the security business. While this may not be the case for some organizations, others may deem these technologies business critical based upon their impact. Following are some technologies that you may want to cover in different phases of a SOC project.
Machine Learning (ML), deep learning and other artificial intelligence related technologies
Internet of Things (IoT) – collecting data from IoT devices and managing threats from IoT botnets, device identities, and other aspects of IoT
Operational Technologies (OT) – covering factories, industrial controls, and SCADA systems
Your business is potentially already a provider or consumer of at least some of these technologies, and you may want to bring them under SOC scope, especially if you are a service provider. In any case, threats to these and other emerging technologies are only going to grow as their deployment and use grows.
A solid definition of SOC scope is key not only to building the business case but also to a successful SOC. While the above list includes key considerations for defining SOC scope and building implementation phases, there may be other aspects you want to consider depending upon your particular business situation.
This article is part of the “Business Case Development” portion of my upcoming book about planning and building a successful SOC.
Like last year, ransomware continues to be a major issue for many organizations, and one of the best things any organization can do for itself is to prepare for dealing with ransomware incidents. While ransomware is morphing into cryptocurrency mining in some cases, it is not the only major concern on security professionals’ minds, as new technologies are emerging fast. From autonomous vehicles and blockchain to drones and connected medical devices, security professionals are called on to provide guidance, frameworks, monitoring, and incident handling to enable the business with these and many other technologies. All of this makes skills development a continuous and major challenge.
While other technology professions can focus primarily on their particular domains, security professionals are expected to know it all.
Given these changes and challenges in the overall technology field, I have updated the CISO MindMap for 2018, which is the 10th version since its initial publication. Major changes are highlighted in red so that users of version 9 (2017) can easily see the updates and adapt.
Like last year, I would recommend focusing on learning the emerging technologies (augmented reality, blockchain, machine/deep learning, computer vision, autonomous vehicles, and others). I can’t emphasize enough how important it is to enable your business with emerging technologies instead of standing in the way of progress. InfoSec professionals should not only be learning these technologies but should also be proactively creating guidelines for using them. You should be thinking about how to get logs and other data to identify threats, integrate with the SOC, and deal with incidents. Many free options for learning new skills are available from MOOC providers like Coursera and edX.
Automation and Productivity
As the workload for security operations professionals is ever increasing, I would also emphasize focusing on automation and increasing productivity. New options are available for automatic threat hunting, anomaly detection, prioritization, and more. The use of open source technologies and scripting should be an essential part of security operations. I would suggest having at least one person on your team with excellent Python or other scripting language skills.
GDPR, Data and Privacy
Compliance with GDPR (General Data Protection Regulation) and data privacy is just a start, and we can expect more regulations like it to follow. Knowing what data is being collected, where it is stored, and how it is utilized and secured are some of the key issues to understand for compliance with privacy regulations. Security professionals should be proactively training and guiding IT teams about data privacy, integrating with DevOps processes, and acting as agents of change in how data is handled. At the same time, we need to be mindful that data is the new currency for our businesses and must be capitalized on and used as a competitive edge.
Lastly, I want to thank everyone who has provided feedback and suggestions about how to improve the MindMap. There are too many names to include all of you, but you know who you are. Your suggestions are very welcome and much appreciated. Enjoy the new MindMap, and don’t forget to send me a note about how it is helping you advance your goals and objectives!
May 21, 2018.
Your feedback is very important to me. Please share your thoughts with me on Twitter at @rafeeq_rehman. Also, please subscribe to my blog using the “subscribe” option at the top-right corner.
Defining the scope of the SOC is crucial for its success and for determining the SOC’s stakeholders. The scope will help determine cost, the staff needed to run the SOC, SOC processes, and many other areas, as listed below:
Coverage – Decide which areas fall under the scope of the SOC (IT, OT, IoT, physical security, Cloud Service Providers, and others).
Incident Handling – Demarcation of where incident response will be handed over to other IT/OT/physical security teams and which parts will be covered by SOC staff. This will also help determine who needs access to the incident management application.
Incident Handling Support – Which parts of incident handling will be outsourced to third parties, if any. For example, if the SOC does not include in-depth forensic capability, it can be outsourced to a third party for major incidents.
Managing SOC IT Infrastructure – The SOC team manages security applications, including the SIEM and security tools. However, IT infrastructure is needed to run these applications and tools. Decide who will manage the network, storage, and server operating systems for the SOC IT infrastructure.
Governance – What is the governance structure and which other teams are involved, especially who approves processes for incident handling when people outside the SOC are involved?
Connection with Outside Parties – When outside parties like the press, communications, and law enforcement are engaged, who will establish relationships with them?
Data Collection Scope – What is the scope of data collection, including logs, netflows, threat intelligence, physical security, and others? What is in scope and what is not included in data collection? If a Cloud environment is in scope, what data can be collected from the Cloud Service Providers (CSPs)?
Vulnerability Management – Who manages critical vulnerabilities, from scanning to prioritization to patching?
Threat Intelligence Gathering and Use – How threat intelligence will be gathered and utilized (internally or outsourced/purchased).
Processes – Define which processes will be part of the SOC and which will be excluded. For example, is the SOC responsible for education and awareness, pen testing, or patching? Depending upon organizational structure, these and other security operations processes may be part of the SOC or outside of its scope.
Single or Multiple Sites – Large organizations may have more than one SOC. In the case of multiple SOCs, define the geographical or organizational scope of each, and also define collaboration mechanisms and resource sharing among the SOC environments.
Compliance – What role the SOC has in achieving and maintaining compliance with government and/or industry regulations.
This seems like quite a lot of work, but defining the scope is a crucial part of a successful SOC foundation. Writing down the scope document and getting buy-in from stakeholders will go a long way toward avoiding problems during the SOC implementation and operations phases.
Data-driven business innovation is no longer something of the distant future; it is a reality of today. Many businesses are already reaping the benefits of monetizing internal data they already possess. Some are taking data-driven business innovation to the next level by mashing up internal data with public data sources like social media feeds, weather data, and real-time traffic information. Others are working on generating new data from sources that were not possible in the past; for example, sensors and affordable wireless data communications are enabling data gathering from vehicles, agriculture, manufacturing, equipment utilization, and more. So what is fueling this revolution, and why now? Following are a few main reasons why this is happening and why you should give it serious consideration.
Cost of Storing Data – The cost of storing enormous amounts of data has decreased to a level where it is almost insignificant. Quite contrary to the old days, when capital investment was needed to build storage infrastructure, almost unlimited, on-demand data storage is now available from many Cloud service providers.
Availability of Analytics Tools – Data analytics tools, both commercial and open source, are available to process very large amounts of data at extremely low cost. Hadoop-based technologies, Cloud services, and machine learning are fueling the development of new tools.
Use of Unstructured Data – Older technologies for data storage and analytics were mostly based upon structured data. However, machine learning and AI advancements have made it possible to use unstructured data for business purposes. Now it is possible to monetize notes from customer service representatives, IVR systems, and unstructured public data sources.
Visualization – Data visualization is key to effective data-driven decision making. These tools are now available as a service, enabling the creation of powerful visualizations and dashboards very quickly and without purchasing expensive tools.
Wireless Communications – Very affordable wireless data communication is enabling data collection from mobile sources and remote locations that was not possible just a few years ago.
How can businesses monetize vast amounts of data and create a data-driven strategy for business innovation? The answer differs depending upon the type of business and industry segment. Following are some ideas you can use as a starting point.
Customer Insights – A better understanding of customers and insight into customer behavior is every business’ dream. Data is enabling businesses to gain customer insights for better customer service and building innovative brands. This is especially interesting for B2C interactions in the financial, insurance, retail, and other industries.
Product Improvement – Many manufacturers are using data to improve products, identify product defects, understand how products are being used, and in many other ways.
New Business Models – Many companies are using data to create new revenue streams at different levels. Some companies are simply getting into the business of selling data while others are offering data analytics as a service. Equipment manufacturers are working on providing proactive maintenance in addition to machinery, all with the help of data gathered through different sensors.
New Levels of Efficiency and Process Improvement – Data is fueling new levels of efficiency in business processes, manufacturing processes, and even in service industries.
The bottom line is that it is imperative for every business to understand the data assets it possesses, understand the data value chain, and initiate a data-driven business transformation strategy.