Access Granted
HUB CITY MEDIA EMPLOYEE BLOG
Avoid Disaster with a Multi-Data Center System
In the event a company’s system goes down, a Disaster Recovery plan should be in place to ensure productivity isn’t heavily impacted...
Is Your Company Prepared For A Data Disaster?
Disaster Recovery (DR) is a term that all heavily computer-dependent companies should be familiar with. It involves policies and procedures put in place to recover vital technology systems from natural or human-induced disasters.
In the event a company’s system goes down, a Disaster Recovery plan should be in place to ensure productivity isn’t heavily impacted. Such recovery strategies can include clustering, using load balancers and utilizing Multi-Data Center (MDC) Architecture.
Imagine a system with:
only one data center containing many application servers in a cluster
multiple database instances in a Real Application Cluster (RAC)
many web servers behind a load balancer
If the entire database goes down, that single point of failure will render all of those components useless. This is where DR planning pays off. A well-implemented MDC Architecture ensures a company has a DR plan for this scenario, and productivity will not go down with the database.
The MDC approach can be used to distribute load between multiple data centers, as well as prevent an outage if an entire data center goes down. A data center acts as a single system with its own policies, data stores, applications, clusters and load balancers. There are three MDC topologies that can be utilized: Active-Active, Active Standby-Passive and Active-Hot Standby.
In the Active-Active topology, each data center is in a different location, and a global load balancer in front of the MDC system directs each user to one data center, based either on geolocation or on the application the user is trying to access. The data centers can be configured to replicate data among themselves, so applications can run against any data center and see the same data. If a data center goes down or traffic spikes, the global load balancer seamlessly transfers users to the next available data center.
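The routing decision a global load balancer makes in an Active-Active topology can be sketched in a few lines. The sketch below is purely illustrative (the data center names, regions and health flags are invented); a real global load balancer makes this decision with geolocation databases and continuous health probes.

```python
# Illustrative Active-Active routing; names and health flags are invented.
DATA_CENTERS = {
    "us-east": {"region": "NA", "healthy": True},
    "eu-west": {"region": "EU", "healthy": True},
}

def route(user_region):
    """Prefer a healthy data center in the user's region, else fail over."""
    for name, dc in DATA_CENTERS.items():
        if dc["region"] == user_region and dc["healthy"]:
            return name
    for name, dc in DATA_CENTERS.items():  # failover: any healthy data center
        if dc["healthy"]:
            return name
    raise RuntimeError("no healthy data center available")
```

If "us-east" is marked unhealthy, North American users are transparently routed to "eu-west", which is exactly the seamless failover behavior described above.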
In the Active Standby-Passive topology, one data center remains passive, or shut down, while the primary data center is up and active. If there is a problem with the primary data center, the passive data center is brought up to replace it until the problem is resolved.
The Active-Hot Standby topology is similar, with the difference being that all data centers remain up; however, only the primary data center will be used until it fails. While the primary data center is catering to users, the standby data centers will still be reading data from the primary to properly fill in should it become unavailable.
To keep data centers identical, manual replication would have to be performed, which can be a time-consuming task. It can consist of manually exporting policies from one data center and manually importing them to another. Oracle Access Manager (OAM) has a feature for MDC called Automated Policy Synchronization (APS) that makes maintaining those data centers less intimidating. In an OAM MDC system, a single data center will be configured as the “master,” while others will be configured as “clones.” APS is an automated replication mechanism that pulls changes from the master and replicates them to each of the clones. For a clone to be involved in APS, a replication agreement must be created between it and the master. This functionality was added in OAM 11.1.2.2 and reduces the time needed to update all of the clone data centers by removing the need for administrators to update them manually.
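Conceptually, APS behaves like a pull-based change log: the master records policy changes in order, and each clone pulls everything newer than the last change it applied. The sketch below illustrates that pull model only; the class names and structures are invented and are not OAM's actual APS API.

```python
# Illustrative pull-based replication in the spirit of APS (not OAM's real API).
class Master:
    def __init__(self):
        self.change_log = []  # ordered (sequence, change) pairs

    def record(self, change):
        self.change_log.append((len(self.change_log) + 1, change))

    def changes_since(self, seq):
        return [entry for entry in self.change_log if entry[0] > seq]

class Clone:
    def __init__(self):
        self.policies = []
        self.last_seq = 0  # tracked per the replication agreement

    def poll(self, master):
        """Pull and apply every change newer than the last one applied."""
        for seq, change in master.changes_since(self.last_seq):
            self.policies.append(change)
            self.last_seq = seq
```

Because each clone remembers its own position in the log, polling is idempotent: a clone that is already caught up pulls nothing, and a clone that has been down simply catches up on its next poll.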
For large enterprises that rely on their OAM systems in order to complete their work, the MDC approach will be the way to go when thinking about disaster recovery. With it, there will always be a backup data center available should one of them go down.
PROFESSIONAL SERVICES - SENIOR SYSTEMS ENGINEER
MFA: The New Security Standard
Multi-factor Authentication (MFA) is a security solution designed to combat the ever-growing threat of cybercriminals, providing a second authentication layer on top of standard username and password authentication...
As The Techniques Of Cybercriminals Advance, So Should Your Security.
Multi-factor Authentication (MFA) is a security solution designed to combat the ever-growing threat of cybercriminals, providing a second authentication layer on top of standard username and password authentication. MFA can take many forms, typically falling into two categories: Device Security and Security Tokens. MFA Device Security authenticates using a device, such as a fingerprint reader, or requires that authentication come from a specific machine. In contrast, MFA Security Tokens require users to enter a secret token, known as a random one-time password (OTP), sent to them via text message or email. Many online banks already use MFA Security Tokens as a security solution. The benefits of using one form over another are usually quite minor.
As the threat to our online assets grows, more and more organizations are implementing MFA as an additional layer of security to protect customers’ information online. Cybercriminals such as terrorist organizations, independent criminal actors, even nation-states, all seek to exploit technological vulnerabilities to gain access to our sensitive data. Financial institutions and financial online services are a major target for these cybercriminals.
Cybercriminals can cause substantial financial losses to individuals they steal from. On September 13, 2016, New York announced the first-in-the-nation comprehensive cybersecurity regulations which mandate minimum security standards for thousands of institutions. MFA is one of these required standards. In addition to almost all major banks, technology organizations, such as Google and Facebook, are also currently utilizing MFA. With new laws like the cybersecurity mandate being created, it won’t be long until MFA is considered common practice for any online service handling sensitive data.
Integrating MFA into current systems isn’t as difficult as one might think. Developers need only integrate an existing MFA technology into their systems, rather than develop one themselves. Developing MFA is the hardest part, but luckily, security experts have done the hard work for us and made it easily obtainable through various companies. These companies (e.g., Authy) provide developers with the code (or APIs) needed to integrate an MFA solution. These APIs are provided in several programming languages and designed to be easy to use. No special security know-how is required, and the solution can be deployed into your environment in a non-intrusive way.
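To make the token half of MFA concrete, here is a minimal time-based one-time password (TOTP) generator, following RFC 6238 and written with only the Python standard library. Commercial MFA APIs wrap logic like this (plus token delivery, rate limiting and secret storage) behind a much simpler interface; this sketch is for illustration, not production use.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second steps since epoch."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both the server and the user's token generator compute the same code from a shared secret and the current time, which is why a stolen password alone is not enough to authenticate.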
Using a 3rd party company to implement MFA doesn’t mean you have to trust them with everything. Because these companies only have access to the MFA tokens, they do not gain access to any other private information upon implementation; they hold only one of the two layers of authentication required for MFA. If an MFA provider were ever compromised, accounts would still be safe, as a username and password are still required in addition to the token.
Take a look inside Multi-Factor for DB, providing Multi-factor Authentication for IDCS and on-premise Oracle Databases!
Even though integrating MFA code into Oracle Middleware may be simple, adjusting configurations inside Oracle products can be perplexing for the average developer. Here at Hub City Media (HCM), we have developed a way to integrate Multi-factor Authentication into Oracle Middleware Homes quickly and easily. Not only has HCM written integration code for Oracle Fusion Middleware products, we have also integrated complete MFA solutions! Integrating MFA into your Oracle Middleware Home can be a simple and quick way to protect your sensitive data.
MFA is the new security standard and investing in this type of security solution is a simple and recommended investment. It may seem like something that can be put off until tomorrow; however, security of private information is not something to wait to protect. The best time for implementation is now. Every day your current system’s vulnerability is increasing, as technology and cybercriminals advance.
Benjamin Franklin once said, “Never leave that till tomorrow which you can do today.” Take action now!
If you have an Oracle environment that needs a Multi-factor Authentication security solution, contact us for a free consultation.
PROFESSIONAL SERVICES - LEAD SYSTEMS ENGINEER
SAML Federation Single Sign-On
Federation Single Sign-on (SSO) is a very popular means of providing SSO among internet applications...
Why Is It Good For Business?
Federation Single Sign-on (SSO) is a very popular means of providing SSO among internet applications, and only a handful of specifications provide SSO across the internet. What exactly is Security Assertion Markup Language (SAML) Federation? Why is it good for business?
SAML Federation works on the basis of establishing trust between entities to form a federation. A federation is a group of organizations which share information, but are internally independent. Essentially, once two entities decide to form a federation, they exchange information to identify each other. With SAML, each entity exchanges a metadata file representing basic information about the entity. An entity is either an Identity Provider (IdP) or a Service Provider (SP). The IdP provides information about the user, and the SP provides a service to the user.
This is great for business, as SAML provides flexibility over who will be able to access your service or user information. It requires both parties to be aware of each other through the metadata file, with each party understanding who provides the service and who provides user identity information. The IdP needs to know what additional user information must be passed to the SP.
To create the federation, metadata files must be exchanged. Then, either side can initiate the SSO event. Depending on whether the user is already authenticated with the IdP, they will either be sent straight to the SP’s application or be prompted for authentication before access to the application is granted.
As soon as the IdP determines the user is authenticated, it sends the necessary user information in a SAML Assertion. The SAML Assertion tells the SP that a user has been authenticated, triggering a search for a matching user so access to the application can be granted. If a matching user is found, they receive access. If not, the SP can either create the missing user or reject the authentication.
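The SP-side decision just described, namely match the asserted user, optionally provision them, or reject, can be sketched as follows. The dictionary-based user store and field names are invented for illustration; a real SP would first parse and validate the signed XML assertion.

```python
def handle_assertion(assertion, user_store, auto_provision=False):
    """SP-side handling of an already-validated assertion: match, provision, or reject."""
    name_id = assertion["name_id"]
    if name_id in user_store:
        return "access granted"
    if auto_provision:  # create the missing user on the fly
        user_store[name_id] = {"email": assertion.get("email")}
        return "access granted"
    return "authentication rejected"
```

Whether to auto-provision is a business decision: it gives partners frictionless onboarding, at the cost of the SP accepting users it has never seen before.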
The SAML assertion is sent as POST data through the end user's browser, so there is no direct connection from the IdP to the SP. There are options to encrypt the data within the assertion to prevent any browser-side snooping. During the metadata exchange, each side can provide an encryption certificate; data is encrypted with the certificate’s public key, and only the server holding the corresponding private key can decrypt it. This adds an extra layer of protection on top of the TLS that already protects the traffic.
The Beauty of SAML Federation
Once you are part of a federation, you can take advantage of services that your partners are federated with. Essentially you can “daisy chain” providers within the federation.
In the above diagram, an employee of Company A (A) authenticates through A’s website and accesses a service that Company C (C) provides to Company B (B). The employee accesses C using his A credentials. C does not know what A is, and vice versa; there is no agreement between the two. B collects the user information from A and then provides it to C. The access depends on B’s relationship with A and C. As far as A is concerned, B is providing the service that C has.
This is the brokered trust model, much like how a mortgage broker is the middleman between you and the bank. You trust when you go to a mortgage broker that they have a good relationship with the bank and your goal is to leverage that relationship to get a better deal. Company A is trusting Company B’s relationship with Company C.
SAML Federation is an amazing technology that makes user management across the internet easy. SAML Federation goes beyond just internet-based SSO, and allows systems across many different services to maintain the user data through SAML Assertions. Federation allows anyone to supplement their service with other service providers, meaning that you can provide a complete solution without owning and operating everything and provide quick and easy access to your clients.
Here at Hub City Media, I have had the opportunity to see many clients with varying implementations of federation, and I find that Oracle Access Manager's flexibility is quite amazing in this area. I expect SAML Federation to be with us for a long time.
PROFESSIONAL SERVICES - ARCHITECT
OASIS SAML Technical Overview - https://wiki.oasis-open.org/security/Saml2TechOverview
Damien Carru's Blog: It's a Federated World - https://blogs.oracle.com/dcarru/entry/federation_proxy_in_oif_idp
IAM Systems And Successful Business
Building a successful Identity and Access Management program isn’t just about having a feature-rich IAM product...
How Do You Leverage Your IAM System To Improve Your Organization's Security?
Building a successful Identity and Access Management program isn’t just about having a feature-rich IAM product. A feature-rich product will aid in automating the provisioning and deprovisioning of applications, but it may not necessarily improve the security posture of an organization.
To improve security and raise awareness, it is crucial to form an IAM governance team responsible for enforcing policies and procedures. Awareness can be raised from the inside out through security, business and compliance managers. The support of these personnel is crucial, as they have the necessary avenues already in place to influence users in the organization.
An IAM program relies on the following factors to remain durable in the face of ever-changing business needs:
Product Selection
Governance Team
End-User Support
Product Selection
Assessing a product is critical to ensure longevity of the IAM program, as the organization matures. The following factors should be assessed when selecting an IAM product:
Available Connectors
Scalability
Deployment tools
Feature set
These factors will dictate the type of team required to maintain and administer the IAM solution. Ease of deployment and integration will significantly increase the productivity of IAM engineers: a flexible system supports constantly changing business needs with minimal friction, giving engineers the ability to quickly integrate applications while ensuring business enablement.
The feature set should also be assessed while keeping current and future business needs in mind. Almost all available vendors provide a feature-rich IAM product, which makes the product selection process difficult. To further narrow the selection, companion products such as authentication directory, role management solutions and governance products provided by the same vendor should be assessed. These could provide tighter integration across IAM components and ensure efficient interoperability.
Governance Team
A Governance team plays an integral part in maintaining the IAM program. The Program Manager oversees the activities of the program, defining policies and forecasting IAM needs to increase its maturity while providing support to the business. The following processes, among others, should be considered for the IAM program:
Application discovery
Rectifying business pain points while interacting with IT systems
Enforcing security and compliance policies
Automating processes to reduce operating costs
These processes should be enforced by leveraging the features available in your IAM product. Implementing governance features such as recertification is crucial for the organization to stay in compliance with regulatory mandates. To implement these features, an application discovery initiative is required to identify the critical applications within the organization. Leveraging the discovery findings, additional projects should be planned to automate account provisioning. This will significantly improve current operating procedures while ensuring provisioning activities are audited and reported appropriately. The end goal for a Governance team is to have all business- and mission-critical applications fully automated and remediated by the IAM system.
To implement these processes, a team that is versatile and aware of the current organization processes is crucial to the success of the IAM program. These resources should identify gaps in existing processes and provide optimized solutions that can scale to the diverse landscape. Not having the proper resources involved will result in wasted time and cause initiatives/projects to either fail or take longer than expected.
Once in an operational steady state, the IAM program should invest in more advanced tools such as Security Information and Event Management (SIEM) systems to provide context around security events and correlate incidents across systems. This allows processes to be continuously monitored, assessed and improved, thereby expanding the footprint of the IAM system within the organization.
End-User Support
No matter how well processes are optimized and automated to fulfill IAM needs, end-user participation is essential. End-users play an integral role in the success of the IAM program. Persistent channels of communication are required to train and educate end-users on IAM processes as the program matures. This practice will result in improved productivity for end-users.
End-user training will raise awareness of the IAM program while ensuring a constant feedback loop that aids in assessing the current state and optimizing processes to achieve a higher degree of end-user loyalty. End-users should be treated as partners rather than merely users of the IAM system.
A mature IAM program will result in tools and processes that application owners and other business teams can use to collaboratively improve the organization’s security posture. Improving the organization’s security will not only reduce operating costs but also aid in building an IAM foundation that can sustain the growth of an organization.
For more information on Identity and Access Management, contact us today.
MANAGER - ARCHITECTURE
Migrating Passwords with Virtual Directories
LDAP Directories are the backbone of many enterprise IAM infrastructures, containing user data for authentication services...
When Does Password Migration Get Complicated?
LDAP Directories are the backbone of many enterprise IAM infrastructures, containing user data for authentication services. This user data contains sensitive information, such as passwords stored using various hashing methods. This often becomes a problem when migrating data from a legacy Directory Server to a newer Directory Server.
Migrating from a legacy Sun Directory Server to Oracle Directory Server (ODSEE) 11g or Oracle Unified Directory (OUD) 11g is fairly straightforward as far as user passwords go. They all use compatible hashing methods and, using replication, give a great deal of flexibility with data migration.
Password migration becomes more complicated when moving from another vendor Directory, such as eDirectory or Microsoft Active Directory. While the majority of user data can be extracted, sometimes with quite a bit of massaging to get it to conform to the schema, passwords may not be possible to migrate due to incompatible hashing methods. One solution is to migrate data without the password, requiring all users to change their password after ‘go live’ with the new Directory Server. This is not always acceptable to the business, so a more ‘seamless’ solution must be found.
Another more attractive solution is using an LDAP Proxy Server, such as Oracle Virtual Directory (OVD) or OUD’s proxy server functionality. LDAP Proxy Servers, often referred to as Virtual Directories, can intercept LDAP requests and perform operations with custom Java plugins. These plugins can be used as a method to intercept bind requests and migrate passwords from one Directory to another.
The LDAP client performs its normal bind operation during authentication.
The proxy server intercepts the bind request, which contains the username and password. The plugin performs a search for that user, checking for a custom attribute that is defined (such as passwordMigrated). If the password has been migrated, a normal bind is performed, which attempts to authenticate the user.
If the password has not been migrated, the Proxy server will attempt to bind to the legacy LDAP server using the user’s credentials.
If the bind attempt is successful, the plugin has the correct password and writes that password into the new LDAP Server and updates the passwordMigrated attribute, returning a successful authentication to the client.
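The steps above can be sketched as a single interception function. The attribute name, store layout and injected bind callables below are illustrative; a real plugin would issue LDAP binds through the proxy's plugin API rather than compare strings in memory.

```python
def intercept_bind(user, password, new_dir, legacy_bind):
    """Plugin-style bind interception that migrates a password on first login.

    new_dir maps username -> {"password": ..., "passwordMigrated": bool};
    legacy_bind(user, password) -> bool stands in for a bind to the legacy server.
    """
    entry = new_dir.get(user)
    if entry is None:
        return False
    if entry["passwordMigrated"]:
        return password == entry["password"]  # stand-in for a normal bind
    if legacy_bind(user, password):           # try the credentials upstream
        entry["password"] = password          # write through to the new server
        entry["passwordMigrated"] = True
        return True
    return False
```

After a user's first successful login, the legacy server is never consulted for them again; the `passwordMigrated` flag short-circuits straight to a normal bind against the new Directory.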
One caveat to keep in mind is performance. LDAP Directory Servers are designed to deliver high performance, especially for authentication requests. Adding a layer that intercepts requests introduces some overhead. While this can be negligible and invisible to end users in a smaller environment, a larger environment with high throughput can experience noticeable overhead. Since proxy servers typically cannot match the throughput of backend Directory Servers, it is important to architect the Proxy Server layer accordingly, horizontally scaling it if necessary.
This solution migrates passwords with a ‘frictionless experience’ for the end user. It is generally an interim solution, run for a specified period of time to capture the majority of users until the legacy system is retired. Users who do not authenticate during this window will need to change their password after the legacy system is removed.
Hub City Media has implemented this method and many other virtual Directory solutions. Please contact us to schedule a discovery session.
PROFESSIONAL SERVICES - SENIOR ARCHITECT
password123456: Avoiding Poor Password Practice
While more applications are beginning to enforce stronger password policies, we can observe through recent password data that many users are still using unsafe passwords...
Are You Committing A Password Faux Pas?
It seems as if we’ve been hearing about the “end of passwords” every year for decades now, especially due to recent hacks splashed across the news. Innovations including Single Sign-on, Multi-factor Authentication, Biometrics and Google’s Trust API have been developed as “password killers” to rid us of the nuisance of remembering passwords. Password management systems, such as 1Password, have become a useful tool to store the increasing number of account passwords we own these days, although frankly, many end users are either unaware or unwilling to put in the extra time and effort to use them. In a 2015 survey of 1,000 consumers, only 8% used a password manager.
Despite limitations of passwords, the “end of passwords” is not on the horizon. Passwords are the cheapest and most versatile data security method to deploy. Many applications, including Identity Management solutions, utilize passwords for administrators and end users alike. Newer authentication methods either combine with passwords, such as RSA Tokens or Smart Cards, or externalize the process while still using a password outside the application, such as Kerberos or SAML Federation. Until we truly see the “end of passwords,” following good password practices will remain as the key defensive front line protecting users and organizations from security breaches.
Quality password practice is achieved by setting a strong password policy and communicating how to create secure passwords to end users. It is of the utmost importance to show the best way to create passwords without simply satisfying minimum requirements. In this vein, coupling a custom password policy with a notification, sent to users upon creating or updating their password, increases the policy’s effectiveness.
While more applications are beginning to enforce stronger password policies, we can observe through recent password data that many users are still using unsafe passwords. Even today, the two most common passwords are “password” and “123456”.
Poor user password choice was exemplified in one of the most infamous hacks in recent memory. The LinkedIn leak of 2012 provides an interesting window into what people consider “secure” for an account they do not want to fall into the wrong hands.
At the time of the security breach, LinkedIn had a lax password policy, allowing six-character passwords with no required complexity. Facebook’s Mark Zuckerberg was among the affected, as the hack exposed that not only was Zuck using the simple password “dadada” for LinkedIn, but he was also using that same password for his Pinterest and Twitter accounts. We also learned that more than one million LinkedIn users chose “123456” as their account password, which is still the second most popular password in use today. The simplicity alone of such passwords is disturbing to those in the security industry. Perhaps even more concerning is that these users never learned one of the more important lessons from the 30-year-old comedy classic Spaceballs (the other being that everything that happens now, is happening now!).
When setting policy, password length must be considered, knowing each extra character adds more security. Best practices show that password length should, at a minimum, be between 12 and 15 characters. Unfortunately, remembering lengthy passwords can be difficult. Many people meet today’s common password policies by starting with an upper-case letter, followed by a string of lowercase letters, and ending with numbers and special characters.
Such patterns are well known to sophisticated password hackers, making passwords such as “Helloworld12!” scarcely stronger than “123456.” The National Institute of Standards and Technology (NIST) has released new password guidelines, suggesting that we do away with composition rules and give the end user more freedom in password selection. Password length matters more than a shorter string of varying character types, and it should not have an artificial ceiling. Let the user create a password up to 64 characters if they so desire!
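Length's advantage over composition can be made concrete with a little arithmetic. Assuming each character is chosen uniformly at random (a generous assumption for human-chosen passwords), entropy grows linearly with length:

```python
import math

def entropy_bits(alphabet_size, length):
    """Bits of entropy for a password of uniformly random characters."""
    return length * math.log2(alphabet_size)
```

Sixteen random lowercase letters give about 75 bits of entropy, beating ten random printable-ASCII characters at about 66 bits, which is the intuition behind NIST's emphasis on length over composition rules.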
There are several ways to create strong, lengthy passwords. A “passphrase” can be taken from a favorite movie or book, or can be created using the first letters of the words of a catchphrase the user knows. For example, the phrase “The New York Mets will win the 2017 World Series!” becomes the password “TNYMwwt2017WS!”. Other methods include combining two weak passwords into one strong one, for example “Password123” and “DaBears” becomes “DaPassBearsWord123”, or doubling some or all of the letters: “Welcome1!” becomes “WweLlCcoMme11!!”. These “passphrases” also have the added benefit of being easier to remember than shorter, more complex passwords.
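The "first letters of a catchphrase" method is mechanical enough to sketch. The helper below is a hypothetical illustration of that method: it keeps numbers whole and preserves trailing punctuation, matching the World Series example above.

```python
import re

def acronym(phrase):
    """Turn a catchphrase into a password from first letters (hypothetical helper).

    Numbers are kept whole and trailing punctuation is preserved.
    """
    parts = []
    for word in phrase.split():
        m = re.match(r"([A-Za-z0-9]+)([^A-Za-z0-9]*)$", word)
        if not m:
            continue  # skip tokens with no letters or digits
        core, punct = m.group(1), m.group(2)
        parts.append(core if core.isdigit() else core[0])
        parts.append(punct)
    return "".join(parts)
```

Of course, a user would run this rule in their head rather than in code; the point is that the phrase is easy to remember while the derived password looks random.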
Using Mark Zuckerberg as an example again, it is important to remind users to never use the same password for more than one account, application or service. The tendency of users to reuse the same password is one of the primary ways hackers compromise systems, simply by reusing a hacked login from another source. Just this past month, Spotify took the initiative to force their user base to reset their passwords as a preventative means of protection from the most recent data breaches outside Spotify and across the Internet.
Remember, passwords will never be a bulletproof security solution. When human error is involved, there will always be the opportunity for misuse. Until cheap and ubiquitous identity kevlar is created, following the password selection methods outlined here, as well as presenting these ideas to the user base, can provide a stronger defense.
Citations:
Henry, Alan. LifeHacker. “Five Best Password Managers.” January 11, 2015
http://lifehacker.com/5529133/five-best-password-managers
Rubenking, Neil J. PCMag. “Survey: Hardly Anybody Uses a Password Manager.” March 3, 2015
http://securitywatch.pcmag.com/security-software/332517-survey-hardly-anybody-uses-a-password-manager
Condliffe, Jamie. Gizmodo. “The 25 Most Popular Passwords of 2015: We're All Such Idiots.” January 19, 2016
http://gizmodo.com/the-25-most-popular-passwords-of-2015-were-all-such-id-1753591514
Hackett, Robert. Fortune. “Here Are the Most Common Passwords Found in the Hacked LinkedIn Data.” May 18, 2016
http://fortune.com/2016/05/18/linkedin-breach-passwords-most-common/
Geekologie. “Dadada, Really?: Mark Zuckerberg Gets Social Media Accounts Hacked, Password Leaked.” June 6, 2016
http://geekologie.com/2016/06/dadada-really-mark-zuckerberg-gets-socia.php
Brecht, Daniel. Infosec Institute. “Password Security: Complexity vs. Length.” December 8, 2015
http://resources.infosecinstitute.com/password-security-complexity-vs-length/
Wisniewski, Chester. Naked Security by Sophos. “NIST’s new password rules – what you need to know.” August 18, 2016
https://nakedsecurity.sophos.com/2016/08/18/nists-new-password-rules-what-you-need-to-know/
Cox, Joseph. Motherboard. “After Breaches At Other Services, Spotify Is Resetting Users' Passwords.” August 31, 2016
http://motherboard.vice.com/read/spotify-passwords-reset-security-precaution
Automation: Enhance Platform Deployments
The successful deployment of a critical application is crucial, and failure can have far-reaching consequences...
Human Error Can Lead To Larger Issues Down The Line. So How Do You Prevent It?
The process of application deployment can be a stressful one for a company’s computing systems, management and IT department. The successful deployment of a critical application is crucial, and failure can have far-reaching consequences. For example, think back to the enormous technical snafu of the healthcare.gov launch - by some estimates, the government healthcare enrollment website was only able to enroll 1% of interested individuals in its first week (1). The importance of a successful deployment is amplified when considering cybersecurity infrastructure, such as identity and access management systems. Automated deployment can improve the speed, reliability and security of a “typical” manual deployment, and can significantly reduce the stress and foundational investment associated with this process.
Deployment is defined as ‘all of the activities that make a software system available for use’ (2). There can be great variability in how deployment is carried out, as both applications and customers have different characteristics and requirements; however, the general pattern consists of: installation, configuration and activation of new software, adaptation of existing systems to new software and any necessary updates. In production environments, the roles involved in this process generally include systems engineers, database administrators, network teams, IT stakeholders and project managers. Automation can reduce much of the complexity involved with deployment, and can realize improvements in speed, reliability and security.
Speed
Oftentimes, it takes a significant amount of time to deploy an application. Coordination of the roles involved may take longer than anticipated due to timezone differences, pre-existing obligations, lack of dedicated resources and other ‘human’ factors. Each role may possess a different part of the information required for successful deployment, such as a password or some configuration information, and preparation of the computing environment can drag on, wasting company time and money. Automation helps to alleviate these issues and can drastically cut down installation and configuration time. For example, many applications have configuration files that can be fed into the installer, and run immediately. Automation tools such as Ansible can feed these files into the installer, with all the information provided beforehand by those that possess it. Additionally, system configuration and installation management can be negotiated beforehand, provided to the automation tool, and with one click the entire deployment process can be kicked off and completed, without any need for manual intervention and all of the slowdowns associated with it.
Reliability
Let’s face it - we all make mistakes. Whether that means forgetting your wallet at home or accidentally ‘fat-fingering’ a configuration option, mistakes make life more difficult. In a business’ IT systems, mistakes can mean lost time, profits and opportunities. Automated deployment significantly reduces the chance of making a mistake by minimizing human error. Most automation tools have the user define their tasks as a series of steps, specified in a file. For example, Ansible has users define steps in a ‘playbook’ - an easily readable list of steps written in a programmatic format (3). As long as this file does not change, the steps involved and the changes made to the system will be identical each time the automation tool is run. This makes troubleshooting, auditing and tracking changes significantly easier.
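The ‘series of steps’ idea can be sketched in a few lines: as long as the step list does not change, every run performs the same actions in the same order and leaves the same audit trail. Step names here are placeholders, not real playbook syntax:

```python
# A deployment defined as an ordered list of steps (in the spirit of a
# playbook): identical input produces identical, auditable runs.
def run_deployment(steps, executed_log):
    for name, action in steps:
        executed_log.append(name)  # record what ran, in order
        action()

if __name__ == "__main__":
    log = []
    steps = [
        ("install packages", lambda: None),  # placeholders for real work
        ("copy config", lambda: None),
        ("start service", lambda: None),
    ]
    run_deployment(steps, log)
    print(log)
```

The recorded log is what makes troubleshooting and auditing straightforward: two runs with the same step list can be compared line by line.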
Security
Generally, the fewer hands involved in deploying an application, the smaller the chance of a security breach within it. Passwords, protected system information and security keys are all exchanged between roles when installing and deploying software systems. This cross-talk introduces significant security holes, as confidential information often sits on email and chat servers, and maybe even on a piece of paper (hopefully not!). With automated deployment, one person can gather all of the necessary information and provide it to the automation agent, which usually has tools for encryption. Thus, automated deployment increases the security of a regular deployment by simplifying it.
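As a rough sketch of this pattern, secrets are supplied once to the automation agent’s protected store instead of circulating between people. Here the store is modeled with environment variables, a simplification of what tools like Ansible Vault actually provide:

```python
import os

# Secrets live in the agent's store (modeled here as environment
# variables) rather than in email threads or chat logs. Names are
# illustrative.
def get_secret(name: str) -> str:
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"secret {name!r} was not provided to the agent")
    return value
```

The deployment code asks the agent for a secret by name; no human ever needs to see or relay the value after it is loaded.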
At Hub City Media, we have used the automation tool Ansible to expedite the installation of identity and access management solutions. Our AutoInstaller products run on top of Ansible and cut product installation time by up to 80%. Our clients get a robust, secure and easily replicable way to integrate software systems into their existing architecture, and a much less stressful deployment process. We also use Ansible to automate internal tasks, such as setting up machine instances and installing bootcamps. Automated deployment has added tremendous value to our internal and external processes, and we hope you too can use it to realize your personal and business goals.
(1) http://www.bloomberg.com/news/articles/2013-10-16/why-the-obamacare-website-was-destined-to-bomb
(2) https://en.wikipedia.org/wiki/Software_deployment
(3) http://docs.ansible.com/ansible/playbooks.html
Reducing IT Security Risks with Identity Management
Leveraging an identity management solution can mitigate IT security risks by eliminating orphan accounts, fixing poor password standards and...
Why An Identity Management System Is Essential To Your Organization
Identity-related security breaches are major concerns for organizations. Due to rapid technological growth, identity is no longer "just" a user account. ‘Identity’ can consist of many devices, roles and entitlements. With the influx of these additional entities associated with an identity, enterprises can become vulnerable when these complex structured identities are not properly administered. Leveraging an identity management solution can mitigate IT security risks by eliminating orphan accounts, fixing poor password standards and providing auditing services.
Orphaned accounts, accounts that still have access to systems without a valid owner, can introduce potential security holes to an enterprise. Without prompt and thorough de-provisioning of terminated employees, stagnant accounts can grant unauthorized access to sensitive systems and provide information to unauthorized users.
An Identity Management (IDM) system can:
Discover, continuously monitor and cleanse orphaned accounts from an organization
Reconcile accounts from various sources, such as databases, applications and directories, to find lingering orphaned accounts
Automate the deprovisioning process of orphaned accounts with well-defined workflows and policies, allowing for more consistent, coordinated and immediate removal, compared to a manual process, which is prone to mistakes
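The reconciliation step above boils down to a set comparison between the authoritative source and each target system. A minimal sketch, with made-up account names:

```python
# Reconcile accounts on a target system against the authoritative
# source (e.g. HR) to flag orphans: accounts with no valid owner.
def find_orphans(target_accounts, active_identities):
    return sorted(set(target_accounts) - set(active_identities))

if __name__ == "__main__":
    hr_active = {"jdoe", "asmith"}
    app_accounts = {"jdoe", "asmith", "bgone"}  # bgone left the company
    print(find_orphans(app_accounts, hr_active))
```

A real IDM system runs this comparison continuously across every connected database, application and directory, then feeds the results into a deprovisioning workflow instead of a print statement.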
Poor password standards can put an organization at risk, as users with weak passwords are more susceptible to identity theft. In the worst case, an entire organization can be compromised if passwords of privileged accounts are exposed to intruders. As new applications are introduced to an organization, users often accumulate numerous credentials, and differing password complexity rules between applications can lead users to create simple, easy-to-remember passwords.
An IDM system can remedy these potential security risks. Password inconsistencies can be reduced by utilizing centralized password policies within an IDM system. In ForgeRock OpenIDM, for example, password policies can be scoped over groups of users. This allows for a tighter level of control on end-user authentication security, especially for high-risk groups who might need more frequent resets or more complex standards for password content and length.
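As a simplified illustration (not OpenIDM’s actual configuration format), group-scoped policies amount to a policy lookup followed by validation:

```python
# Password policies scoped by group: high-risk groups get stricter
# rules. Group names and thresholds are invented for illustration.
POLICIES = {
    "privileged": {"min_length": 16},
    "default": {"min_length": 8},
}

def password_ok(password: str, group: str) -> bool:
    """Validate a password against the policy for the user's group."""
    policy = POLICIES.get(group, POLICIES["default"])
    return len(password) >= policy["min_length"]
```

A production policy would of course also check character classes, history and expiry; the point is that one central table governs every application, rather than each application enforcing its own rules.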
Auditing user and group activity is essential for any organization, especially for meeting regulatory requirements. IDM systems can:
Centralize historical records, which can be crucial to debugging problems
Provide answers to questions such as “when was this account provisioned?” and “who approved the request?” in activity logs and database tables
Identify unusual or suspicious activity in real time
Oracle Identity Manager has functionality to define audit policies that detect and inform administrators of Segregation of Duty violations, constructing robust approval workflows to handle them.
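A Segregation of Duty check like the one above amounts to testing each user’s role set against a list of forbidden combinations. A minimal sketch, with invented role names:

```python
# Pairs of roles that policy says no single user may hold together.
SOD_RULES = [("create_vendor", "approve_payment")]

def sod_violations(user_roles):
    """Return the forbidden role pairs this user currently holds."""
    held = set(user_roles)
    return [rule for rule in SOD_RULES
            if rule[0] in held and rule[1] in held]
```

An IDM product would evaluate rules like these on every provisioning event and route any hit to an administrator or an approval workflow.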
IDM systems offer a multitude of benefits to an organization, not least of which is reducing critical security risks. Vendors such as Oracle and ForgeRock offer feature-rich and extensible IDM solutions that can complement existing environments with powerful governance tools. Consumers should decide which solution best meets their unique needs, bearing in mind that an IDM system is essential to the security and efficiency of an organization.
Minimizing Access Request Complexities - Maximizing User Experience
What exactly should an end-user see when requesting access? This is a common hurdle for teams when implementing an Identity solution…
Why Is A Simplified End-User Experience Beneficial To All?
You are an Administrative Assistant on the first day of your new job. Your manager sends you the link to your new company’s access request site and asks you to request everything you will need to perform your duties. You log into the Identity Management (IDM) system to make a request and are immediately alarmed at the number of options and fields available. Selecting any of these options brings you to a new page with the same number of options! You start to contemplate asking a colleague what they requested or even begin to submit a few generic requests -- if only you could find where to submit them!
What Exactly Should An End-User See When Requesting Access?
Unfortunately, this scenario happens to end-users more often than we’d like to admit. When considering a new Identity Management solution, or even reevaluating an existing solution for improvement, it’s important to keep this type of scenario in mind and set goals to reduce the complexities of access requests in the eyes of the end-user.
What exactly should an end-user see when requesting access? This is a common hurdle for teams when implementing an Identity solution. One overarching guideline for approaching this issue is to keep the interface simple. While this is not a new concept, it is often forgotten when attempting to provide a feature-rich solution. Remember, in general, less is more to an end-user.
Most end-users do not visit the Identity Management solution frequently, so there is little opportunity for knowledge gained in one session to carry over to the next. Even if there is no direct impact to security, an implementer should consider restricting view-permissions on screens, resources or attributes to only the necessary groups. In addition, the end-user should be provided choices and direction over free-form requests in order to make the requests meaningful, the fulfillment of manual processes more efficient and the setup of automated processes possible. This may require translations of attribute values to help the end-user understand the requests they are creating.
What impact do target resources have on this process? When developing the interface for end-users, implementers must consider that the Identity Management solution is dependent upon target resources for defining necessary form field values. Often these inputs are similar to what is supplied from the trusted source and can be transferred to the target resource behind the scenes within the Identity Management solution. However, some of these inputs are specific to the resource and must be specified on account creation or update.
It may be possible to shift this responsibility away from the end-user by manipulating the target resource to default some of these values in certain situations. Target resource administrators may even be able to take this a step further by consolidating points of access control. For example, several application owners may choose to utilize a common repository to manage permissions allowing the Identity Management solution to interact with a single target system for all participating applications. Either approach may translate to the end-user as less to manage and remember.
What if end-users are still confused by what they should request? It is not uncommon for end-users to know what job they must fulfill and still not know what access is needed for that job. This is especially true for ‘Day One’ employees. At this point, Role Based Access Control (RBAC) may be considered to further simplify the request system. Following this approach, roles would be defined to identify specific duties of an employee within an organization.
Once defined, these roles can be mapped to all target resource permissions required to perform those duties. A user no longer has to request an individual piece of access from each target resource, only the role they need to fulfill. This makes the requests more intuitive by further automating the process and placing more of the technical attributes beyond the scope of end-user visibility. These benefits come at some cost, however. Significant effort may be required by a Business Analyst to initially define roles, approval workflows may become more complex and certifications may be necessary to maintain the roles (although that comes with additional benefits as well!).
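The role-to-permission mapping can be pictured as a simple lookup: the user requests one role, and the system expands it into the underlying target-resource entitlements. All names below are illustrative, not from any real catalog:

```python
# RBAC sketch: a role maps to the target-resource permissions needed
# for a duty, so the user requests one role instead of each entitlement.
ROLES = {
    "admin_assistant": [
        ("email", "mailbox"),
        ("hr_portal", "timesheet_entry"),
        ("file_share", "dept_read"),
    ],
}

def entitlements_for(role: str):
    """Expand a requested role into (resource, permission) pairs."""
    return ROLES.get(role, [])
```

From the end-user’s point of view, the whole second dictionary disappears; only the role name is visible in the request catalog.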
Can we eliminate end-user requests altogether? In most cases, this is not feasible. However, the number of requests may be greatly reduced by further automating processes in the Identity Management solution. Information from the Identity Management solution’s trusted source may be able to identify a number of roles applicable to a user.
This starts to form the basis for Attribute Based Access Control (ABAC) and the idea of birthright resources. Attributes of a user profile, specifying anything from a user’s position to the entire active user base, can be mapped to a set of roles. From this point, provisioning is carried out similarly to RBAC. This may, for example, further alleviate ‘Day One’ basic access requests for new and transferred employees. Roles that are provisioned via ABAC can be removed from the request system, reducing the choices available to end users, while a RBAC approach can be utilized in parallel for the remaining roles.
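A minimal sketch of attribute-driven birthright roles (the rules and role names are invented for illustration):

```python
# ABAC sketch: attributes on the user profile drive birthright roles,
# removing them from the request catalog entirely.
def birthright_roles(user: dict):
    roles = {"base_access"}  # every active user gets this on Day One
    if user.get("department") == "HR":
        roles.add("hr_staff")
    if user.get("is_manager"):
        roles.add("manager")
    return roles
```

Anything granted here never appears in the request interface; the end-user only ever shops for what their attributes could not predict.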
The methods described above aim to reduce what is available to end-users when requesting access. By doing so, end-users are less likely to request inappropriate access or have requests stalled in approval or manual provisioning workflows due to inaccurate request descriptions. It also lends itself to a better user experience by limiting the training required to make an employee effective at utilizing the IDM system.
As the system evolves and begins to build upon each of these methods, Identity Management solution administrators will begin to focus more heavily on certification and segregation of duty definitions to maintain the relationships among attributes, roles and target resources.
Confidentiality and Ethics: When Outside Consultants have Inside Access
With increasing security concerns in both consumer applications and large-scale enterprise deployments, it becomes even more critical as professional consultants to adhere to a code of ethics...
Ethics Through The Eyes Of An IAM Consultant
As Identity and Access Management (IAM) consultants, we spend a significant amount of time in differing client environments, often having access to databases, directories and applications containing very sensitive user data.
For example, we might be at a client site with full access to their Human Resources application. These applications contain very sensitive user information, including home addresses, social security numbers, salaries, etc. I can personally recall several instances where I was on a project with all of this data accessible.
With increasing security concerns in both consumer applications and large-scale enterprise deployments, it becomes even more critical as professional consultants to adhere to a code of ethics that maintains end-user privacy, preserves confidentiality and protects against information leaks.
A few things to keep in mind:
We have a responsibility to our clients and their user base to maintain privacy. User data, not just personally identifiable information, should always be respected. User data in Development and Quality Assurance environments is often directly copied from Production. This is a great security risk, as non-Production environments are often less secure and have greater levels of access within an organization, making them prone to misuse. Clients are advised to invest time in sanitizing data in these environments (e.g. scrambling SSNs or changing birth dates). With a bit of work, it is very much possible to maintain and mirror Production level data in Test environments.
While working on client projects, we have an obligation to keep information that we discover confidential. For instance, a consultant might have access to a client's IAM system and see a familiar employee in a 'Disabled' state with all access revoked. While it might be tempting to share this information with colleagues, it is highly unethical to do so. Often, we are asked to sign non-disclosure agreements; however, even if we are not, there is still a strong responsibility to keep private information private.
We also have an obligation to report when confidentiality might be at risk. For example, if you received an improperly distributed spreadsheet containing very sensitive information, such as employee salaries, you should quickly realize the error and immediately inform someone who is able to intercede before that information is leaked. If not, extremely sensitive data could be severely compromised.
Computer systems can be used to violate the privacy of others. As consultants, we have an obligation to maintain confidentiality. In the end, it’s about being professional and respecting the value of privacy.
For further reading about this topic, please refer to the Software Engineering Code of Ethics and Professional Practice by the Association for Computing Machinery.
Hello IDCS!
Peter Barker, Oracle’s Senior Vice President for Identity Management and Security, recently penned a blog post officially announcing Oracle’s new Identity Cloud Service (IDCS). Public details on IDCS with the complete set of functions and features are...
With The Rise Of Cloud Adoption, What Do Businesses Need To Know To Be Successful?
Explore our IDCS offerings and request a demo!
Peter Barker, Oracle’s Senior Vice President for Identity Management and Security, recently penned a blog post officially announcing Oracle’s new Identity Cloud Service (IDCS). Public details on IDCS with the complete set of functions and features are yet to be revealed; however, key elements of Peter’s post should not be missed. Peter describes IDCS as a system built with a “standards-first and API-first philosophy.” That’s a clear and welcome shift from Oracle’s previous security product philosophy and indicates Oracle is paying attention to directions in which the market is moving.
Clients want security solutions that implement standards, allowing them to “wire together” products from different vendors. Clients are adopting cloud products and services from multiple vendors at an amazing rate. If IT has a chance of ensuring the safety and security of this activity, it will be through choosing corresponding cloud security products that implement a rich set of standard security protocols that are easy to deploy.
Clients also want APIs because, frankly, not every vendor can anticipate all integrations that might be critical to success. Clients don’t want just an SDK. They want standard REST APIs that can be easily consumed from different languages, platforms and developers of various skill sets. REST is the new SOA, and REST APIs have simplified B2B, B2C and B2A innovation. REST APIs allow companies like Uber to integrate ride sharing services into other mobile applications. They have also allowed transit authorities, like BART or the New York MTA, to provide schedule data to application developers, thus crowdsourcing new mobile experiences. Security is no different. REST APIs are allowing Hub City Media to integrate security features everywhere within our organization and soon for our customers.
Clients want the cloud. Hub City Media has embraced the cloud in all aspects of IT infrastructure as early adopters; however, the market for cloud security is still maturing. In December 2015, Gregg Kreizman of Gartner estimated only 10% of web access management customers had moved to the cloud (1). Much of the market is still deciding how to reap the benefits of cloud IAM, and Oracle is well positioned to capture a significant portion of that market.
Hub City Media has participated in the beta program for IDCS for several months now and is very excited to show what the product can do. More importantly, Hub City Media is intrigued with what we’ve been able to create with this innovative cloud solution. We’ll be updating you on our progress as Peter’s team reveals more product details. We think you’re going to like what you see from Oracle and Hub City Media!
For those headed to Oracle OpenWorld this year, reach out to us and let us know if you’d like a sneak peek. Contact me or our sales team for a preview of our cloud-complementary innovations!
“Market Guide for Web Access Management Software”, Gregg Kreizman, Gartner, ID: G00276092, 23 December 2016
CTO AND FOUNDER
EUS Enterprise Roles Developer Use Case
EUS simplifies and increases quality in processes for adding user accounts, managing credentials, eliminating orphaned accounts...
Oracle's Enterprise User Security feature continues to gain adopters who want to centralize account management across all Oracle databases in the enterprise. EUS simplifies and increases quality in processes for adding user accounts, managing credentials, eliminating orphaned accounts, and more.
Organizations that need more convincing may want to take a closer look at the Enterprise Role concept of EUS. Enterprise Roles can actually multiply the economic and security benefits that EUS brings, by introducing a framework for managing database privileges across applications and test levels.
The structure of the Enterprise Role is a pair of collections maintained in the Oracle directory:
The first collection (Grantees) is of the individuals and/or groups who have access to the Enterprise Role. The organization manages group membership in their enterprise LDAP directory, such as AD.
The second collection (DB Roles) contains the privileges, in the form of a list of distinct database roles. Each entry in the list contains the identification of a database and a specific role on that database. Therefore, a single enterprise role can span several databases, granting one or more distinct roles on each of those databases.
An interesting application of this has come up in a couple of our customer engagements. The use case involves several development teams who need access to their respective application schemas. Typically, developers hold full modify access (SELECT, INSERT, UPDATE, DELETE) to application data in lower test levels, but only SELECT access in user acceptance and production levels. Let's look at an example:
2 Application Schemas:
HUM_RSRCE
PRICING
5 Database instances spanning 4 Test Levels:
UNIT (both apps run on DB instance: UNITDB)
INTEGRATION (both apps run on DB instance: INTGDB)
USER_ACCEPT (both apps run on DB instance: UACCDB)
PRODUCTION (HUM_RSRCE runs on prod instance: HRPROD; PRICING runs on prod instance: PRPROD)
Database roles defined to manage schema privileges:
HR_READ -- select on HUM_RSRCE schema objects
HR_MODIFY -- select, insert, update, delete on HUM_RSRCE
PR_READ -- select on PRICING schema objects
PR_MODIFY -- select, insert, update, delete on PRICING
Identification of users requiring Developer privileges:
For HUM_RSRCE:
Members of the HR_DEV group in AD
Judy Stinch, the HR IT Liaison
For PRICING:
Members of the MKT_DEV group in AD
John Slough, the Marketing IT Liaison
Without EUS the work required to manage all these users and grants on each database, through repeated development cycles and organization changes, would be tedious and prone to error. Let's look at how EUS simplifies this. First, by implementing EUS-managed Shared Schemas, user account provisioning on the listed databases is no longer necessary.
Next we create EUS Enterprise Roles to manage the privileges against each application schema on each test level. First, the local roles on each database must be altered to designate them as global roles. Then, we create just two enterprise roles and their collections in the EUS directory:
HR_DEVELOPER
associates those needing Developer privileges against the HUM_RSRCE schema with the appropriate global role in the appropriate test level database
PR_DEVELOPER
associates those needing Developer privileges against the PRICING schema with the appropriate global role in the appropriate test level database
The diagram below provides the details of the HR_DEVELOPER enterprise role and its containers. The PR_DEVELOPER role would have a similar structure.
The power of this arrangement is obvious. A single object, existing in the directory, manages privileges for a class of users across a series of databases. And the privileges can vary from database to database.
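The same arrangement can be sketched as a data structure. This is an illustration of the concept, not the actual EUS directory schema: grantees on one side, (database, global role) pairs on the other, with HR_MODIFY in the lower levels and HR_READ in user acceptance and production:

```python
# Sketch of the HR_DEVELOPER enterprise role from the example above.
HR_DEVELOPER = {
    "grantees": {"group:HR_DEV", "user:jstinch"},
    "db_roles": {
        "UNITDB": "HR_MODIFY",  # full modify access in lower levels
        "INTGDB": "HR_MODIFY",
        "UACCDB": "HR_READ",    # select-only from user acceptance up
        "HRPROD": "HR_READ",
    },
}

def resolve_role(enterprise_role, member, database):
    """Return the global role a member receives on a database, if any."""
    if member in enterprise_role["grantees"]:
        return enterprise_role["db_roles"].get(database)
    return None
```

One object answers the question "what can this developer do on this database?" for every database in the chain, which is exactly the leverage the enterprise role provides.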
Notice that the roles inside each database are the same from level to level. Because of this consistency, operations such as level promotion or test data refresh can execute without any change to the role or grant structure before or after. The Enterprise Role ensures that developer access is immediately available with privileges appropriate to the level.
In this example, we developed a database-level role structure that doesn't have to change across migration levels, then used EUS Enterprise roles to manage the differences in developer privileges across environments. The result is a configuration with:
higher, more stable security;
improved quality out of the development process; and
lower maintenance costs.
Hub City Media has the world's best team to help bring the benefits of EUS to your organization. Please contact us to schedule a discovery session.
SENIOR DBA
Four Tips for Integrating Your Identity Management System with Your Information Technology Service Management System
We have always had customers who wanted to integrate their identity management system to a custom user interface. Recently, we have been noticing...
We have always had customers who wanted to integrate their identity management system with a custom user interface. Recently, we have been noticing an increase in customers that want to integrate the identity management system (IDM) with an information technology service management system (ITSM). For those of you unfamiliar with the term, an ITSM is basically your trouble ticketing or IT request system. More organizations are using the ITSM as the central system for all user IT requests, such as for equipment and software. It’s part of a larger movement to attempt to centralize IT processes and measure the effectiveness of IT to provide those services. So if organizations are centralizing all IT requests, it seems only natural that they would want user requests for access to flow through the same system. This creates a “one stop shop” for all interactions between business users and IT.
Oracle Identity Manager (OIM) 11g R2 introduced a new request user interface that uses a more familiar metaphor, the shopping cart. System access is now something you search for in a catalog, add to your shopping cart and then “check out” to submit the request. This type of task-based UI is something users need little training to master because they use it all the time when they shop online; however, despite this tremendous leap forward in usability, some customers still want to move requests to the ITSM.
There is no out-of-the-box integration between OIM and any of the more popular ITSM systems. So this means a custom integration using the OIM API and the API of the ITSM is required. Here are four guidelines you should consider in your integration design:
Use the ITSM for requests only. While it may be tempting to hide the entire IDM system from end users, it’s unnecessary and will require you to re-engineer more than the request interface. Most users will understand that the ITSM is for service requests but things like password changes / resets happen elsewhere.
Keep access approvals in the IDM system. If your IDM system is like OIM, then it will be capable of supporting custom approval workflows. Use the IDM system for these approvals. Approvals may require the approver to do more than merely accept or reject the request. The approver may be asked to update fields in the request. Since this is something that is already happening on the IDM system, don’t reinvent the wheel. You also want your IDM system to be the single point of audit. This means that all data around the request should be collected and captured by the IDM system. If you have approvals occurring in the ITSM, you will need to pull data from the IDM and ITSM systems to get a complete picture for your auditors. By keeping the requests in the IDM system, you will simplify your ability to provide auditors with information.
Post status updates from the IDM to the ITSM. While users are going to be submitting access requests to the ITSM, they are also going to be checking on the status of those requests. It’s important to update the ITSM with the current status of the request from key points in the request workflow running in the IDM system.
Automatically synchronize the IDM catalog with the ITSM catalog. The catalog of requestable items in your IDM system are going to change constantly. You want to automate the synchronization of items from the IDM catalog into your ITSM catalog as much as possible. This is critical as you don’t want to duplicate configuration on your IDM and ITSM for every change to your access catalog.
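Sketching that last guideline, a synchronization job can compute the difference between the two catalogs and apply only the changes, so no item is ever configured twice by hand. Item names here are invented:

```python
# Diff-based catalog sync: which requestable items must be added to,
# or retired from, the ITSM catalog to match the IDM catalog.
def catalog_diff(idm_items, itsm_items):
    idm, itsm = set(idm_items), set(itsm_items)
    return {"add": sorted(idm - itsm), "retire": sorted(itsm - idm)}
```

In practice this diff would be computed on a schedule against both systems’ APIs, with the resulting adds and retirements pushed to the ITSM automatically.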
This is by no means an exhaustive list, but it’s a good start. Your requirements are going to drive much of the specific design of your integration.
If you have any questions or comments, feel free to contact me. I’d like to hear how you are planning your ITSM / IDM integration. We’ve created several ITSM / IDM integrations for our customers and if you’re considering it, we can help.
Email: steve@hubcitymedia.com Twitter: @stevegio
CTO AND FOUNDER