
Start-Up AND Scale-Up: Entrepreneurship is Central to Job Creation

Insights on Building the Wisconsin Tech Economy

Dedicated Computing's President & CEO, Don Schlidt, offers a point of view on job creation in Wisconsin, arguing that growth is as much about scaling established businesses as it is about driving new start-ups. Don brings more than 30 years of business experience and 20 years of technology industry experience leading high-performance organizations. In addition to serving on the Dedicated Computing Board of Directors, Schlidt serves on the Technical Advisory Board for Intel Corporation as well as the Wisconsin Technology Council.

To be clear, I am a big fan of Wisconsin as a place for entrepreneurs to start businesses! Wisconsin has done a lot to make the business environment more advantageous for start-ups. As an evolving tech-based economy, Wisconsin has developed some really smart ideas about how to make it easier to be an entrepreneur.

In addition, more than a hundred state-based VC firms drive steady start-up investments. Many of their founders began in other parts of the country and then opted to capitalize on all the Midwest has to offer in family lifestyle and moderate cost of living. There is no question that much of today's rhetoric centers on the power of start-ups as job creators, and they certainly are. But we need to remember that established firms have an equally critical role to play in fueling employment growth through their ability to scale.

The proof is in the numbers. Today in Wisconsin, according to statistics from both the Department of Workforce Development and the Wisconsin Economic Development Corporation, approximately 90% of jobs created come from existing, entrepreneurial companies (scale-ups) that are driving steady, continued growth. This is not a new concept, but it is one that does not seem to get nearly as much press or attention as start-ups and mega-deals. One of the best-known advocates for scale-up growth was Andy Grove, former CEO and Chairman of Intel Corporation. Surveying the Bay Area's ecosystem of start-ups, he pointed out a critical need to ensure that public funding and grants directed at start-ups, which typically aim to create one to three new jobs over a two-year period, be matched with similar funds directed to scale-up firms poised to grow from three jobs to 300!

If one of our goals here in Wisconsin is to attract and retain talent, we need to understand and embrace these dynamics – and appropriately balance our investments in start-ups as well as scale-ups. We will attract a more diverse talent base, protect our early investments, and generate a broader variety of the types and levels of jobs available to our residents. This is a smart, pay-it-forward strategy that not only values the business contribution of entrepreneurs but also recognizes that economic development comes from our established community of businesses growing the employment base throughout the State of Wisconsin.

I am not advocating pulling resources from start-ups, but rather a balanced approach to economic development. "Balance" means we begin with the start-up, investing to shore it up for greater growth. We then balance those investments with similar investments in established companies already prepared to scale, protecting and accelerating their opportunities as well.

If one of Wisconsin's important economic messages is that we help entrepreneurs create jobs, we must not lose sight of those entrepreneurs until they have scaled their companies and grown their job base to a sustainable level. The scale-up process relies on an entirely different set of skills and resources than a new business venture in start-up mode.

This post is featured in the latest Accelerate, a publication from the Waukesha County Business Alliance. Click here to read the full publication. 

Interested in learning more about Wisconsin Entrepreneurship? Read The Good News About Wisconsin Entrepreneurship from John Koskinen, Chief Economist for the Wisconsin Department of Revenue, in January’s Accelerate starting on page six. To read more from Dedicated Computing, visit our blog.

Whitelisting, Blacklisting, and Deep Freeze

By Bill Gray, Senior Systems Engineer, Dedicated Computing

You want to keep your embedded system secure, so you decide to install a virus scanner. But the device has no internet connection, and you cannot afford to send a service person out to update the virus definitions as often as needed. Is it really worth it? There are three approaches to safeguarding your device; understanding which practice best fits your system will lower costs and keep your device secure.

Blacklisting

Typical virus scanners use a blacklisting technology: a list of known malicious code signatures is kept, and when any code matches one of those signatures, the application is quarantined. Blacklisting has notable downsides on embedded systems: the signature database must be updated frequently for true protection, and the performance impact of scanning on the system can be significant.
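
As a concrete (and deliberately simplified) illustration, the sketch below scans files for known byte patterns. The signature names and bytes are hypothetical stand-ins for a real vendor's definitions, which are exactly the part that demands constant updates:

```python
import pathlib

# Hypothetical signature database: byte patterns a real scanner would
# associate with known malware families. Real databases hold millions
# of entries and must be refreshed constantly.
SIGNATURES = {
    "Trojan.Example.A": b"\xde\xad\xbe\xef\x13\x37",
    "Worm.Example.B": b"evil_payload_marker",
}

def scan_file(path: pathlib.Path) -> list[str]:
    """Return the names of any blacklisted signatures found in the file."""
    data = path.read_bytes()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

if __name__ == "__main__":
    for p in pathlib.Path(".").iterdir():
        if p.is_file():
            hits = scan_file(p)
            if hits:
                print(f"QUARANTINE {p}: matched {hits}")
```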

Whitelisting

Whitelisting, on the other hand, needs no virus definition updates, has no impact on performance, and allows only what has been defined as "good" to execute, with no other applications being affected. An operator of an embedded system cannot run rogue code and therefore cannot infect the device.

Anti-virus products work by maintaining a list of programs and code signatures known to contain rogue software, then searching an operating system, including its file system and memory space, for matches. The anti-virus engine can often identify and quarantine software just after it executes, in (somewhat) real time. Anti-virus databases must be updated often to address the most recent malware and exploits. Zero-day vulnerabilities, flaws that have yet to be fixed or even identified by the community at large, are impervious to anti-virus products.

Whitelisting is the opposite of anti-virus scanning. Instead of locating and removing known threats, the whitelisting approach simply refuses to load and execute any file not already allowed. This is accomplished by creating, within a secure environment, a list of files (executable or otherwise) and their respective cryptographic checksums. A whitelisting kernel driver or shim is loaded during boot and is responsible for intercepting all filesystem reads and writes. For a file to be successfully loaded, it must first pass all security restrictions implemented by the whitelisting product. The result is a secure system that has no performance impacts, needs no virus definition updates, and can protect against zero-day attacks. Blacklisting is a good fit for general-purpose computing. Purpose-specific computing, on the other hand, must operate without IT oversight and within very specific operating requirements, making whitelisting a highly effective security solution for an embedded system.
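
A minimal sketch of the bookkeeping behind this approach, using only Python's standard library. A real product enforces the list from a kernel driver; this only shows how the checksum list might be built in the secure environment and consulted at load time:

```python
import hashlib
import pathlib

def sha256_of(path: pathlib.Path) -> str:
    """Compute the checksum that identifies this exact file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_whitelist(root: pathlib.Path) -> dict[str, str]:
    """Run in the secure build environment: record a checksum per file."""
    return {str(p): sha256_of(p) for p in root.rglob("*") if p.is_file()}

def is_allowed(path: pathlib.Path, whitelist: dict[str, str]) -> bool:
    """Run at load time: reject anything unlisted or modified since signing."""
    expected = whitelist.get(str(path))
    return expected is not None and sha256_of(path) == expected
```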

Deep Freeze

Another approach to protecting the state of a deployed system is a product called Deep Freeze, proprietary software developed by Faronics and available for Microsoft Windows and macOS. The software protects the operating system (at the hard drive level) by implementing a mechanism similar to copy-on-write. In this environment, the user is not allowed to modify the hard drive contents (OS or data, per configuration directives); all changes are redirected to an overlay filesystem, most likely residing in main memory. On reboot, all changes are lost and the operating system reverts to its original state. Permanent changes can be made by first removing the protection mechanism (a process called "thawing"), making the necessary OS changes, and then re-applying the lock ("freezing").
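
The redirect-and-discard behavior can be modeled in a few lines. This toy class illustrates the copy-on-write idea only; it is not Faronics' actual implementation:

```python
class FrozenDisk:
    """Toy copy-on-write model of a 'frozen' drive."""

    def __init__(self, contents: dict[str, bytes]):
        self._base = dict(contents)            # protected drive contents
        self._overlay: dict[str, bytes] = {}   # in-memory redirect area
        self.frozen = True

    def write(self, name: str, data: bytes) -> None:
        if self.frozen:
            self._overlay[name] = data         # redirected; base untouched
        else:
            self._base[name] = data            # thawed: change persists

    def read(self, name: str) -> bytes:
        return self._overlay.get(name, self._base[name])

    def reboot(self) -> None:
        self._overlay.clear()                  # all session changes vanish

    def thaw(self) -> None:
        self.frozen = False                    # allow permanent changes

    def freeze(self) -> None:
        self.frozen = True                     # re-apply the lock
```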

Deep Freeze does not prevent the system from running malicious software; a reboot merely discards the changes that software made. This protection scheme also does not stop a user from booting alternative media (e.g. a USB flash drive), which would allow modifications to the "frozen" operating system drive's contents.

Secure/Trusted Booting

By Wade Brown, Senior Research and Design Engineer, Dedicated Computing

Secure boot is a component of the UEFI firmware package and does not exist in legacy BIOS implementations.  The intended use of this component is to protect against boot kits (as opposed to root kits, which target the OS).  The general boot path is: the computer starts, UEFI loads, seeks the first bootable device, and loads its boot sector.  Depending on the architecture, the boot sector points to software (i.e. the first-stage boot loader) that will eventually be loaded and executed.  On a system without Secure Boot enabled, this software is loaded and executed without verification.

UEFI provides non-volatile, private storage space that can be used to store public key infrastructure (PKI) based certificates. With Secure Boot enabled, the UEFI firmware verifies that the boot loader has been digitally signed, that it has not been modified, and that the signature matches one of the certificates stored in NVRAM.  If the boot loader fails verification, it is not loaded and executed, and the boot process stops.
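
In outline, the check reduces to "does any enrolled certificate verify this image's signature?" Here is a minimal sketch of that logic using the third-party cryptography package and bare RSA public keys. Real UEFI firmware verifies Authenticode-signed PE images against X.509 certificates, so treat this as a model of the decision, not of the on-disk format:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicKey

def verify_boot_loader(image: bytes, signature: bytes,
                       db: list[RSAPublicKey]) -> bool:
    """Return True only if some enrolled key verifies the image's signature."""
    for public_key in db:
        try:
            public_key.verify(signature, image,
                              padding.PKCS1v15(), hashes.SHA256())
            return True                # trusted signature: OK to execute
        except InvalidSignature:
            continue                   # try the next enrolled certificate
    return False                       # verification failed: halt the boot
```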

A boot loader implementing the Trusted Boot methodology only concerns itself with verifying the next software component. For example, in Windows and Linux operating systems, this next component is often the kernel. The kernel, in turn, can use Trusted Boot to verify every driver and other software component.  In this manner, an anti-virus engine can be loaded before any other third-party driver or software. Trusted Boot normally uses a hardware component called a Trusted Platform Module (TPM), a small microprocessor dedicated to cryptographic functions, including integrated PKI keys.
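
The hand-off logic of such a trust chain can be sketched with the standard library alone: each stage holds the expected measurement of the next and refuses to continue on a mismatch. The stage names and digests below are placeholders:

```python
import hashlib

# Placeholder manifest: each stage's expected SHA-256, recorded at build time.
EXPECTED_DIGESTS = {
    "kernel.img": "<digest recorded at build time>",
    "antivirus_driver.sys": "<digest recorded at build time>",
}

def measure(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

def load_stage(name: str, blob: bytes) -> bytes:
    """Verify a component against the manifest before handing it control."""
    if measure(blob) != EXPECTED_DIGESTS.get(name):
        raise RuntimeError(f"Trusted Boot: {name} failed verification")
    return blob  # in a real chain, execution now transfers to this stage
```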

The Keys to Unlocking Self-Encrypting Drives


By Wade Brown, Senior Research and Design Engineer at Dedicated Computing

Hardware-based encrypting storage drives implement cryptographic logic directly on the controller chip. Any hard drive using this technology is constantly encrypting and decrypting data, regardless of whether the drive has been locked. Unlike software-based encryption, hardware crypto engines are faster and transparent to the end user.

Self-encrypting drives (SEDs) are locked by providing a password, which is used to encrypt the hard drive's internal private key. After the drive has been reset or power cycled, the password must be provided again to decrypt the private key. If the correct password is provided, the drive's media can be accessed normally; otherwise, all data, including the master boot record, is inaccessible.
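
A rough sketch of that password-wraps-the-key relationship, in Python. The XOR "wrap" stands in for the hardware key-wrap a real drive performs on-chip, and the stored verifier is a stand-in for how a drive checks the password without exposing the key; every name here is illustrative:

```python
import hashlib
import hmac
import os

def derive_kek(password: str, salt: bytes) -> bytes:
    """Stretch the user password into a key-encryption key (KEK)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

class ToySED:
    def __init__(self, password: str):
        self.salt = os.urandom(16)
        media_key = os.urandom(32)            # the key that encrypts the media
        self._verifier = hashlib.sha256(media_key).digest()
        kek = derive_kek(password, self.salt)
        # XOR stands in for the real key wrap (e.g. AES-KW) done on-chip.
        self._wrapped = bytes(a ^ b for a, b in zip(media_key, kek))
        self._unlocked = None                 # key is never stored in the clear

    def unlock(self, password: str) -> bool:
        kek = derive_kek(password, self.salt)
        key = bytes(a ^ b for a, b in zip(self._wrapped, kek))
        if hmac.compare_digest(hashlib.sha256(key).digest(), self._verifier):
            self._unlocked = key              # media is now readable
            return True
        return False                          # wrong password: stay locked
```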

There are three typical methods of unlocking a SED. The first, if supported by the motherboard BIOS, makes use of the ATA Security feature: on boot, the BIOS asks the user for a password to "unlock" the drive, and a SED can use this password to decrypt its private key. Note, however, that each hard drive implementing the ATA Security feature must be unlocked individually.

The second method uses a SED concept called "MBR shadowing". This method involves a small, embedded operating system stored on the SED itself. On boot, a shadow MBR is presented to the BIOS, which then loads the embedded OS. The embedded OS is designed to either ask the user for the password or contact a central hub (e.g. an IT security server) for credentials. On successful authentication, the embedded OS reboots, and when the BIOS scans the drive again, the "real" MBR is presented. The embedded OS can also unlock other SEDs in the same system.

Finally, the last method does not lock the boot drive at all. Instead, the booted operating system is responsible for unlocking the other drives (i.e. data drives) in the system. This method is ideal when the operating system drives are mounted internally while the data drives are accessible from the front of the chassis (e.g. removable drives). Theft of a data drive gains the thief nothing: once the drive loses power, it must be unlocked again before any data can be read.

Data on SEDs can be destroyed simply by asking the drive to generate a new private key. As previously stated, a SED constantly encrypts and decrypts data using this key; when the key is regenerated, all previously stored data is immediately and permanently lost.
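
Continuing the illustrative ToySED sketch from earlier, a crypto-erase amounts to replacing the media key (and re-wrapping it under a new password), after which every previously written byte is unrecoverable:

```python
# A crypto_erase method for the ToySED sketch above:
def crypto_erase(self, new_password: str) -> None:
    """Replace the media key; everything written under the old key is gone."""
    self.salt = os.urandom(16)
    new_key = os.urandom(32)
    self._verifier = hashlib.sha256(new_key).digest()
    kek = derive_kek(new_password, self.salt)
    self._wrapped = bytes(a ^ b for a, b in zip(new_key, kek))
    self._unlocked = None
```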

The encryption and locking functionality follows the OPAL 1.x or 2.x specifications from the Trusted Computing Group (TCG). The specification defines the protocol for encryption devices and also the capability to lock specific LBA ranges on the storage media, so it is possible to encrypt an entire hard drive or only specific block ranges. The OPAL specification also defines two user roles (admin, user), each with individual passwords. OPAL is generally directed toward consumer-grade storage devices (SATA, NVMe), while another TCG specification, named "Enterprise", targets enterprise-grade storage devices (SAS).

Smart Design and Market Leadership Go Hand-in-Hand as Requirements for Powering the OEM/ODM Relationship

How and when to capitalize on ODM resources for a competitive edge with embedded computing

Leading Original Design Manufacturers (ODMs) of embedded computing systems work diligently to protect OEMs from the costly mistakes that can occur during product development. ODMs focus on solving the unknown and unpredictable challenges common to ongoing R&D.

This approach enables the OEM product development team to focus on their mission-critical application and holistic product design, not the performance of the hardware. ODMs drive improved system performance, reduced time-to-market, and market leadership.

“Smart Design and Market Leadership: 2 Requirements for Powering the OEM/ODM Relationship” provides OEMs a simple road map for getting more out of their industrial PC supplier. OEMs requiring embedded computing systems built from commercial off-the-shelf (COTS) components should be leaning on their partner relationship for in-house capabilities such as engineering, system design, and tolerance testing for thermal, electrical, and acoustic performance.

Engineering product success starts early in the design process. ODMs can add a unique development perspective – a holistic approach that is both collaborative and engineering-focused, helping OEMs meet end-user goals on day-one and throughout the entire life of the OEM’s product.

With awareness of product requirements such as performance targets, environmental challenges, quality expectations, and time to market, ODMs can value-engineer with smart off-the-shelf components, drawing on decades of supply-chain relationships to handle extended product life and change management. When off-the-shelf is not an option, ODMs offer in-house electrical, mechanical, and software engineering expertise to meet otherwise impossible needs with custom designs for enclosures, electronics, and applications.

ODMs focus on solving these kinds of problems with ongoing R&D, continually asking themselves: what computing capabilities will OEMs require before they know they need them? The result is value-added relationships that protect OEM resources, enhance system performance, reduce time-to-market, and create market leadership.

Click HERE to read more. Then connect Engineer-to-Engineer with Dedicated Computing if you’re exploring the new standard for a PC supplier partnership.

Swing by our LIBRARY Section for additional captivating content for Global OEMs.

4 Great Questions To Ask When Choosing Your Industrial PC Supplier

When choosing an industrial PC supplier for your mission-critical product design, consider these four questions before you pull the trigger.

4 Key questions to ask when selecting a supplier for embedded computing:

  1. Smart purchase lifecycles – Can they prioritize them? Choose a partner that understands the long-life demands of embedded, compute-intensive design.
  2. Proactive change management – Can they speak to best practices for eliminating unnecessary and costly changes?
  3. Collaboration – Do they offer the design expertise and personal attention to help get your product to market?
  4. Mission-critical value – Do they understand the value of your product, offering market insight that fuels a time-to-market advantage?

Build For Life

If you’re an OEM developing life-improving and life-saving devices, each computing problem you solve directly impacts the quality of your product and customer satisfaction. Partnering with an ODM will save you valuable resources that you can apply to developing other critically important products. Strategic partnerships with ODMs offer:

  • Intelligent lifecycle management: Choose ODMs that anticipate embedded technology availability 7–10 years into the future, serving the long-life demands of highly regulated medical and life science–related systems
  • Proactive change management: Distinguish ODMs by proactive change management strategies that reduce risk and eliminate unnecessary or costly changes through product modularity and easier design adaptation for future iterations
  • Smart, collaborative design: An ODM should configure and validate systems early in product design, helping OEMs avoid issues later on and smoothing out production in the long run
  • Application-specific value: The ODM should work with the OEM to provide comprehensive hardware, software, and service solutions for each specific market vertical

Click HERE to download the Build for Life white paper and read more.

Dedicated Computing

Choosing the right ODM partner to develop your specialized products can save you time, money, and headaches throughout their life cycle.

Contact us to discover how Dedicated Computing can help launch your products and manage ongoing change in ways that will benefit your bottom line and ensure great customer experiences.