Percentage of eDiscovery Sanctions Cases Declining – eDiscovery Trends

According to Kroll Ontrack, the percentage of eDiscovery cases addressing sanctions “dropped by approximately ten percent” compared to 2011, while “cases addressing procedural issues more than doubled”.  Let’s take a closer look at the numbers and at some cases in each category.

As indicated in their December 4 news release, in the past year, Kroll Ontrack experts summarized 70 of the most significant state and federal judicial opinions related to the preservation, collection, review and production of electronically stored information (ESI). The breakdown of the major issues that arose in these eDiscovery cases is as follows:

  • Thirty-two percent (32%) of cases addressed sanctions regarding a variety of issues, such as preservation and spoliation, noncompliance with court orders and production disputes.  Out of 70 cases, that would be about 22 cases addressing sanctions this past year.  Here are a few of the recent sanction cases previously reported on this blog.
  • Twenty-nine percent (29%) of cases addressed procedural issues, such as search protocols, cooperation, production and privilege considerations.  Out of 70 cases, that would be about 20 cases.  Here are a few of the recent procedural issues cases previously reported on this blog.
  • Sixteen percent (16%) of cases addressed discoverability and admissibility issues.  Out of 70 cases, that would be about 11 cases.  Here are a few of the recent discoverability / admissibility cases previously reported on this blog.
  • Fourteen percent (14%) of cases discussed cost considerations, such as shifting or taxation of eDiscovery costs.  Out of 70 cases, that would be about 10 cases.  Here are a few of the recent eDiscovery costs cases previously reported on this blog.
  • Nine percent (9%) of cases discussed technology-assisted review (TAR) or predictive coding.  Out of 70 cases, that would be about 6 cases.  Here are a few of the recent TAR cases previously reported on this blog.

While it’s nice and appreciated that Kroll Ontrack has been summarizing the cases and compiling these statistics, I do have a couple of observations/questions about their numbers (sorry if they appear “nit-picky”):

  • Sometimes Cases Belong in More Than One Category: The case percentage totals add up to 100%, which would make sense except that some cases address issues in more than one category.  For example, In re Actos (Pioglitazone) Products Liability Litigation addressed both cooperation and technology-assisted review, and Freeman v. Dal-Tile Corp. addressed both search protocols and discoverability / admissibility.  It appears that Kroll classified each case in only one group, which makes the numbers add up, but could be somewhat misleading.  In theory, some cases belong in multiple categories, so the total should exceed 100%.
  • Did Cases Addressing Procedural Issues Really Double?: Kroll reported that “cases addressing procedural issues more than doubled”; however, here is how they broke down the category last year: 14% of cases addressed various procedural issues such as searching protocol and cooperation, 13% of cases addressed various production considerations, and 12% of cases addressed privilege considerations and waivers.  That’s a total of 39% for three separate categories that now appear to be described as “procedural issues, such as search protocols, cooperation, production and privilege considerations” (29%).  So, it looks to me like the percentage of cases addressing procedural issues actually dropped by 10 percentage points (from 39% to 29%).  Actually, the two biggest category jumps appear to be discoverability and admissibility issues (2% last year to 16% this year) and TAR (0% last year to 9% this year).

So, what do you think?  Has your organization been involved in any eDiscovery opinions this year?  Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

Louisiana Order Dictates That the Parties Cooperate on Technology Assisted Review – eDiscovery Case Law

During this Thanksgiving week, we at eDiscovery Daily thought it would be a good time to catch up on some cases we missed earlier in the year.  So, we will cover a different case each day this week.  Enjoy!

In the case In re Actos (Pioglitazone) Products Liability Litigation, No. 6:11-md-2299, (W.D. La. July 27, 2012), a case management order applicable to pretrial proceedings in a multidistrict litigation consolidating eleven civil actions, the court issued comprehensive instructions for the use of technology-assisted review (“TAR”).

In an order entitled “Procedures and Protocols Governing the Production of Electronically Stored Information (“ESI”) by the Parties,” U.S. District Judge Rebecca Doherty of the Western District of Louisiana set forth how the parties would treat data sources, custodians, costs, and format of production, among others. Importantly, the order contains a “Search Methodology Proof of Concept,” which governs the parties’ usage of TAR during the search and review of ESI.

The order states that the parties “agree to meet and confer regarding the use of advanced analytics” as a “document identification mechanism for the review and production of . . . data.” The parties will meet and confer to select four key custodians whose e-mail will be used to create an initial sample set, after which three experts will train the TAR system to score every document based on relevance. To quell the fears of TAR skeptics, the court provided that both parties will collaborate to train the system, and after the TAR process is completed, the documents will not only be randomly sampled for quality control, but the defendants may also manually review documents for relevance, confidentiality, and privilege.

The governance order repeatedly emphasizes that the parties are committing to collaborating throughout the TAR process and requires that they meet and confer prior to contacting the court for a resolution.

So, what do you think?  Should more cases issue instructions like this?  Please share any comments you might have or if you’d like to know more about a particular topic.

Case Summary Source: Applied Discovery (free subscription required).  For eDiscovery news and best practices, check out the Applied Discovery Blog here.

eDiscovery Daily is Two Years Old Today!

 

It’s hard to believe that it was two years ago today that we launched the eDiscoveryDaily blog.  Now that we’ve hit the “terrible twos”, is the blog going to start going off on rants about various eDiscovery topics, like Will McAvoy in The Newsroom?   Maybe.  Or maybe not.  Wouldn’t that be fun!

As we noted when recently acknowledging our 500th post, we have seen traffic on our site (from our first three months of existence to our most recent three months) grow an amazing 442%!  Our subscriber base has nearly doubled in the last year alone!  We now have nearly seven times the visitors to the site as we did when we first started.  We continue to appreciate the interest you’ve shown in the topics and will do our best to continue to provide interesting and useful eDiscovery news and analysis.  That’s what this blog is all about.  And, in each post, we like to ask you to “please share any comments you might have or if you’d like to know more about a particular topic”, so we encourage you to do so to make this blog even more useful.

We also want to thank the blogs and publications that have linked to our posts and raised our public awareness, including Pinhawk, The Electronic Discovery Reading Room, Unfiltered Orange, Litigation Support Blog.com, Litigation Support Technology & News, Ride the Lightning, InfoGovernance Engagement Area, Learn About E-Discovery, Alltop, Law.com, Justia Blawg Search, Atkinson-Baker (depo.com), ABA Journal, Complex Discovery, Next Generation eDiscovery Law & Tech Blog and any other publication that has picked up at least one of our posts for reference (sorry if I missed any!).  We really appreciate it!

We like to take a look back every six months at some of the important stories and topics during that time.  So, here are some posts over the last six months you may have missed.  Enjoy!

We talked about best practices for issuing litigation holds and how issuing the litigation hold is just the beginning.

By the way, did you know that if you deleted a photo on Facebook three years ago, it may still be online?

We discussed states (Delaware, Pennsylvania and Florida) that have implemented new rules for eDiscovery in the past few months.

We talked about how to achieve success as a non-attorney in a law firm, providing quality eDiscovery services to your internal “clients” and how to be an eDiscovery consultant, and not just an order taker, for your clients.

We warned you that stop words can stop your searches from being effective, talked about how important it is to test your searches before the meet and confer and discussed the importance of the first 7 to 10 days once litigation hits in addressing eDiscovery issues.

We told you that, sometimes, you may need to collect from custodians that aren’t there, differentiated between quality assurance and quality control and discussed the importance of making sure that file counts add up to what was collected (with an example, no less).

By the way, did you know the number of pages in a gigabyte can vary widely and the same exact content in different file formats can vary by as much as 16 to 20 times in size?

We provided a book review on Zubulake’s e-Discovery and then interviewed the author, Laura Zubulake, as well.

BTW, eDiscovery Daily has had 150 posts related to eDiscovery Case Law since the blog began.  Fifty of them have been in the last six months.

P.S. – We still haven't missed a business day yet without a post.  Yes, we are crazy.

eDiscovery Best Practices: Quality Control, Making Sure the Numbers Add Up

 

Yesterday, we wrote about tracking file counts from collection to production, the concept of expanded file counts, and the categorization of files during processing.  Today, let’s walk through a scenario to show how the files collected are accounted for during the discovery process.

Tracking the Counts after Processing

We discussed the typical categories of excluded files after processing – obviously, what’s not excluded is available for searching and review.  Even if your approach includes a technology assisted review (TAR) methodology such as predictive coding, it’s still likely that you will want to do some culling out of files that are clearly non-responsive.

Documents during review may be classified in a number of ways, but the most common classifications are responsive, non-responsive, or privileged.  Privileged documents are also typically classified as responsive or non-responsive, so that only the responsive documents that are privileged need be identified on a privilege log.  Responsive documents that are not privileged are then produced to opposing counsel.
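The partitioning logic above is simple enough to express in a few lines of Python.  This is a hypothetical sketch (real review platforms track these tags in a database, not in-memory dicts), but it shows how the privilege log and production set fall out of the two tags:

```python
def partition_for_production(docs):
    """Split reviewed documents into a production set and a privilege log.

    docs: list of dicts, each with boolean "responsive" and "privileged" tags.
    Only responsive, non-privileged documents are produced; only responsive,
    privileged documents need appear on the privilege log.
    """
    produce = [d for d in docs if d["responsive"] and not d["privileged"]]
    priv_log = [d for d in docs if d["responsive"] and d["privileged"]]
    return produce, priv_log
```

Non-responsive documents (privileged or not) simply fall out of both lists, which is why they never reach opposing counsel or the privilege log.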

Example of File Count Tracking

So, now that we’ve discussed the various categories for tracking files from collection to production, let’s walk through a fairly simple eMail-based example.  We conduct a fairly targeted collection of a PST file from each of seven custodians in a given case.  The relevant time period for the case is January 1, 2010 through December 31, 2011.  Other than date range, we plan to do no other filtering of files during processing.  Duplicates will not be reviewed or produced.  We’re going to provide an exception log to opposing counsel for any file that cannot be processed and a privilege log for any responsive files that are privileged.  Here’s what this collection might look like:

  • Collected Files: 101,852 – After expansion, 7 PST files expand to 101,852 eMails and attachments.
  • Filtered Files: 23,564 – Filtering eMails outside of the relevant date range eliminates 23,564 files.
  • Remaining Files after Filtering: 78,288 – After filtering, there are 78,288 files to be processed.
  • NIST/System Files: 0 – eMail collections typically don’t have NIST or system files, so we’ll assume zero files here.  Collections with loose electronic documents from hard drives typically contain some NIST and system files.
  • Exception Files: 912 – Let’s assume that a little over 1% of the collection (912) is exception files like password protected, corrupted or empty files.
  • Duplicate Files: 24,215 – It’s fairly common for approximately 30% of the collection to include duplicates, so we’ll assume 24,215 files here.
  • Remaining Files after Processing: 53,161 – We have 53,161 files left after subtracting NIST/System, Exception and Duplicate files from the total files after filtering.
  • Files Culled During Searching: 35,618 – If we assume that we are able to cull out 67% (approximately 2/3 of the collection) as clearly non-responsive, we are able to cull out 35,618 files.
  • Remaining Files for Review: 17,543 – After culling, we have 17,543 files that will actually require review (whether manual or via a TAR approach).
  • Files Tagged as Non-Responsive: 7,017 – If approximately 40% of the document collection is tagged as non-responsive, that would be 7,017 files tagged as such.
  • Remaining Files Tagged as Responsive: 10,526 – After QC to ensure that all documents are either tagged as responsive or non-responsive, this leaves 10,526 documents as responsive.
  • Responsive Files Tagged as Privileged: 842 – If roughly 8% of the responsive documents are privileged, that would be 842 privileged documents.
  • Produced Files: 9,684 – After subtracting the privileged files, we’re left with 9,684 responsive, non-privileged files to be produced to opposing counsel.

The percentages I used for estimating the counts at each stage are just examples, so don’t get too hung up on them.  The key is to note the numbers in red above.  Excluding the interim counts in black, the counts in red represent the different categories for the file collection – each file should wind up in one of these totals.  What happens if you add the counts in red together?  You should get 101,852 – the number of collected files after expanding the PST files.  As a result, every one of the collected files is accounted for and none “slips through the cracks” during discovery.  That’s the way it should be.  If not, investigation is required to determine where files were missed.
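With the categories laid out this way, the reconciliation check is easy to script.  Here is a minimal Python sketch using the example counts from this post (the category names are illustrative):

```python
# Reconciling the example file counts from collection to production.
# Each collected file should land in exactly one "final" category.
collected = 101_852  # expanded count: 7 PSTs -> eMails and attachments

final_categories = {
    "filtered (outside date range)": 23_564,
    "exception (password protected/corrupted/empty)": 912,
    "duplicates": 24_215,
    "culled as clearly non-responsive during searching": 35_618,
    "tagged non-responsive during review": 7_017,
    "responsive but privileged": 842,
    "produced to opposing counsel": 9_684,
}

accounted = sum(final_categories.values())
assert accounted == collected, f"{collected - accounted:,} files unaccounted for!"
print(f"All {accounted:,} collected files accounted for.")
```

If the assertion fails, the difference tells you exactly how many files slipped through the cracks, and the category totals tell you where to start investigating.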

So, what do you think?  Do you have a plan for accounting for all collected files during discovery?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Best Practices: Quality Control, It’s a Numbers Game

 

Previously, we wrote about Quality Assurance (QA) and Quality Control (QC) in the eDiscovery process.  Both are important in improving the quality of work product and making the eDiscovery process more defensible overall.  For example, in attorney review, QA mechanisms include validation rules to ensure that entries are recorded correctly, while QC mechanisms include a second review (usually by a review supervisor or senior attorney) to ensure that documents are being categorized correctly.  Another overall QC mechanism is the tracking of document counts through the discovery process, especially from collection to production, to identify how every collected file was handled and why each non-produced document was not produced.

Expanded File Counts

Scanned counts of files collected are not the same as expanded file counts.  There are certain container file types, such as Outlook PST files and ZIP archives, that exist essentially to store a collection of other files.  So, the count that is important to track is the “expanded” file count after processing, which includes all of the files contained within the container files.  In a simple scenario where you collect Outlook PST files from seven custodians, the actual number of documents (eMails and attachments) within those PST files could be in the tens of thousands.  That’s the starting count that matters if your goal is to account for every document in the discovery process.
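To make the container-vs-expanded distinction concrete, here is a minimal sketch using Python’s standard-library zipfile module for ZIP archives (PST files would require a specialized library such as pypff and are not shown; the function name is ours, not any tool’s API):

```python
import zipfile

def expanded_count(zip_paths):
    """Count the files *inside* a set of ZIP containers.

    Each ZIP is one file on disk, but for discovery accounting the
    expanded count -- every file stored within the containers -- is
    the number that matters.  Directory entries are skipped.
    """
    total = 0
    for path in zip_paths:
        with zipfile.ZipFile(path) as zf:
            total += sum(1 for info in zf.infolist() if not info.is_dir())
    return total
```

Three ZIP files on disk might yield an `expanded_count` in the thousands, and it is that larger number you carry forward to the reconciliation at the end of discovery.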

Categorization of Files During Processing

Of course, not every document gets reviewed or even included in the search process.  During processing, files are usually categorized, with some categories of files usually being set aside and excluded from review.  Here are some typical categories of excluded files in most collections:

  • Filtered Files: Some files may be collected, and then filtered during processing.  A common filter for the file collection is the relevant date range of the case.  If you’re collecting custodians’ source PST files, those may include messages outside the relevant date range; if so, those messages may need to be filtered out of the review set.  Files may also be filtered based on type of file or other reasons for exclusion.
  • NIST and System Files: Many file collections also contain system files, like executable files (EXEs) or Dynamic Link Library files (DLLs), that are part of the software on a computer and do not contain client data, so those are typically excluded from the review set.  NIST files are included on the National Institute of Standards and Technology list of files that are known to have no evidentiary value, so any files in the collection matching those on the list are “De-NISTed”.
  • Exception Files: These are files that cannot be processed or indexed, for whatever reason.  For example, they may be password-protected or corrupted.  Just because these files cannot be processed doesn’t mean they can be ignored.  Depending on your agreement with opposing counsel, you may need to at least provide a list of them on an exception log to prove they were addressed, if not attempt to repair them or make them accessible (BTW, it’s good to establish that agreement for disposition of exception files up front).
  • Duplicate Files: During processing, files that are exact duplicates may be put aside to avoid redundant review (and potential inconsistencies).  Exact duplicates are typically identified based on the HASH value, which is a digital fingerprint generated based on the content and format of the file – if two files have the same HASH value, they have the same exact content and format.  Emails (and their attachments) are typically identified as duplicates based on key metadata fields instead, so an attachment cannot be “de-duped” out of the collection by a standalone copy of the same file.

All of these categories of excluded files can reduce the set of files to actually be searched and reviewed.  Tomorrow, we’ll illustrate an example of a file set from collection to production to illustrate how each file is accounted for during the discovery process.

So, what do you think?  Do you have a plan for accounting for all collected files during discovery?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Trends: iDiscovery

 

Yesterday was a day that tech enthusiasts and ordinary people alike had circled on their calendar since it was confirmed as the date of Apple’s press event to unveil the iPhone 5. Apple proudly boasted that it had sold 400 million iOS devices by the end of June of this year, which can in part be attributed to the smoothly operating software running on their devices. Advances in mobile high tech have made these portable computers accessible and their presence inescapable even among late adopters. What is simple and intuitive from a user standpoint, however, can prove challenging and fickle to a computer forensics expert.

Richard Lutkus of Law Technology News writes that there are many factors that must be considered when investigating an iOS device, such as “device model, generation, storage capacity, iOS version, iCloud activation status, and passcode protection status. One example of the importance of these identification questions is iCloud, which is Apple's information syncing service. The presence and status of this service may be important because information found on an iOS device could be automatically synced to one or several computers or other iOS devices.”

Lutkus points out that mobile device forensics requires skills that not all computer forensic professionals possess. For example, an important part of the preservation of a mobile device is isolation from all data networks to ensure no changes occur on the device, such as a remote wipe.  A few ways of isolating the hardware include a signal blocking “Faraday” bag, removing the SIM card, or enabling airplane mode. From there, the two methods of imaging an iOS device are logical capture and physical imaging. Lutkus explains, “Logical capture is the preservation of all active (no file fragments or other ephemera) files on a device. This method is similar to an iTunes backup in that it saves the same types of data as iTunes backups. In contrast, physical imaging captures everything that a logical capture does, but includes deleted file fragments, temporary cache files, and other ephemera. Generally, physical imaging is more desirable if it is technically possible. Though slower, this approach is widely accepted, is compatible with most forensic tools, and preserves all data on a device.”

There are other challenges in preserving the data in an iOS device, especially hardware newer than the iPhone 4S and iPad 3, which include encryption of even unallocated space of memory when a passcode has been used. The quality of data one might expect to find after imaging and decrypting could include: “contacts, call logs, speed dials, voicemail, Bluetooth devices, screenshots, bookmarks, web clips, calendars, messages, email, attachments, internet history, internet cookies, photos, audio recordings, notes, videos, music, app list, keystroke information, GPS coordinates, wi-fi network memberships, user names and passwords, map searches, app-specific data, cell tower information, serial number, device name, device IMEI (international mobile equipment identity number), device serial number, version, and generation, etc.” This type of information can be crucial in the first 7 to 10 days after litigation hits, as we have previously covered here. These devices seem to know so much about us that companies like Apple have had to release statements to state they are not recording and storing your location in response to allegations of privacy invasion. With expected sales of the iPhone 5 potentially adding between a quarter and a half of a percent to America’s GDP, millions more iPhones will make their way into the hands of consumers and eventually, no doubt, into the hands of mobile forensic experts.

So, what do you think? Have mobile computing devices, such as smartphones and tablets, been material to your eDiscovery work? Have other mobile operating systems, such as Blackberry or Android, presented challenges that differ from iOS? Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Milestones: Our 500th Post!

One thing about being a daily blog is that the posts accumulate more quickly.  As a result, I’m happy to announce that today is our 500th post on eDiscoveryDaily!  In less than two years of existence!

When we launched on September 20, 2010, our goal was to be a daily resource for eDiscovery news and analysis and we have done our best to deliver on that goal.  During that time, we have published 144 posts on eDiscovery Case Law and have identified numerous cases related to Spoliation Claims and Sanctions.   We’ve covered every phase of the EDRM life cycle.

We’ve discussed key industry trends in Social Media Technology and Cloud Computing.  We’ve published a number of posts on eDiscovery best practices on topics ranging from Project Management to coordinating eDiscovery within Law Firm Departments to Searching and Outsourcing.  And, a lot more.  Every post we have published is still available on the site for your reference.

Comparing our first three months of existence with our most recent three months, we have seen traffic on our site grow an amazing 442%!  Our subscriber base has nearly doubled in the last year alone!

And, we have you to thank for that!  Thanks for making the eDiscoveryDaily blog a regular resource for your eDiscovery news and analysis!  We really appreciate the support!

I also want to extend a special thanks to Jane Gennarelli, who has provided some wonderful best practice post series on a variety of topics, ranging from project management to coordinating review teams to learning how to be a true eDiscovery consultant instead of an order taker.  Her contributions are always well received and appreciated by the readers – and also especially by me, since I get a day off!

We always end each post with a request: “Please share any comments you might have or if you’d like to know more about a particular topic.”  And, we mean it.  We want to cover the topics you want to hear about, so please let us know.

Tomorrow, we’ll be back with a new, original post.  In the meantime, feel free to click on any of the links above and peruse some of our 499 previous posts.  Maybe you missed some?  😉

eDiscovery Best Practices: Assessing Your Data Before Meet and Confer Shouldn’t Be Expensive

 

So, you’re facing litigation and you need help from an outside provider to “get your ducks in a row” to understand how much data you have, how many documents have hits on key terms and estimate the costs to process, review and produce the data so that you’re in the best position to negotiate appropriate terms at the Rule 26(f) conference (aka, meet and confer).  But, how much does it cost to do all that?  It shouldn’t be expensive.  In fact, it could even be free.

Metadata Inventory

Once you’ve collected data from your custodians, it’s important to understand how much data you have for each custodian and how much data is stored on each media collected.  You should also be able to break the collection down by file type and by date range.  A provider should be able to process the data and provide a metadata inventory of the collected electronically stored information (ESI) that enables the inventory to be queried by:

  • Data source (hard drive, folder, or custodian)
  • Folder names and sizes
  • File names and sizes
  • Volume by file type
  • Date created and last date modified

When this is done prior to the Rule 26(f) conference, it enables your legal team to intelligently negotiate at the conference by understanding the potential volume (and therefore potential cost) of including or excluding certain custodians, document types, or date ranges in the discovery order.
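A basic version of such an inventory can be built with nothing but Python’s standard library.  This is a sketch under stated assumptions (the field names are illustrative, not any provider’s actual schema, and real tools would also handle container expansion and hashing):

```python
import datetime
import os

def metadata_inventory(root, custodian):
    """Walk a collected folder and emit one metadata row per file.

    Enough to query volume by custodian, folder, file type (extension),
    size, and date range ahead of the Rule 26(f) conference.
    """
    rows = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            rows.append({
                "custodian": custodian,
                "folder": dirpath,
                "file": name,
                "ext": os.path.splitext(name)[1].lower(),
                "size": st.st_size,
                "modified": datetime.date.fromtimestamp(st.st_mtime),
            })
    return rows
```

Once the rows exist, a question like “how much .pst volume does Custodian A have in 2010?” is a one-line filter-and-sum, which is exactly the kind of answer that makes the negotiation concrete.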

Word Index of the Collection

Want to get a sense of how many documents mention each of the key players in the case?  Or, how many mention the key issues?  After a simple index of the data, a provider should be able to at least provide a consolidated report of all the words (not including stop words, of course), from all sources that includes number of occurrences for each word in the collected ESI (at least for files that contain embedded text).  This initial index won’t catch everything – image-only files and exception (e.g., corrupted or password protected) files won’t be included – but it will enable your legal team to intelligently negotiate at the meet and confer by understanding the potential volume (and therefore potential cost) of including or excluding certain key words in the discovery order.
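A consolidated word-occurrence report of this kind can be sketched with a Counter.  The tiny stop-word list below is illustrative only (a real index uses a much fuller list), and as noted above this only works for files with extracted text:

```python
import re
from collections import Counter

# Illustrative stop words only; real indexes use a much longer list.
STOP_WORDS = {"the", "and", "of", "to", "a", "in", "for", "is", "on", "that"}

def word_index(texts):
    """Return (word, count) pairs across all extracted text, most common
    first, skipping stop words."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z0-9']+", text.lower()):
            if word not in STOP_WORDS:
                counts[word] += 1
    return counts.most_common()
```

Scanning the top of the report for the names of key players and key issues gives a quick, cheap sense of how broad (and therefore how expensive) each candidate search term would be.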

eDiscovery Budget Worksheet

Loading the metadata inventory into an eDiscovery budget worksheet that includes standard performance data (such as document review production statistics) and projected billing rates and costs can provide a working eDiscovery project budget projection for the case.  This projection can enable your legal team to advise their client of projected costs of the case, negotiate cost sharing or cost burden arguments in the meet and confer, and create a better discovery production strategy.
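The arithmetic behind such a worksheet is straightforward.  Here is a simplified sketch; the throughput and billing-rate defaults are made-up assumptions for illustration, not standard performance data:

```python
def review_budget(docs_to_review, docs_per_hour=50, rate_per_hour=75.0):
    """Project review hours and cost from a document count.

    docs_per_hour and rate_per_hour are illustrative placeholders;
    a real worksheet would plug in your own reviewers' measured
    throughput and actual billing rates.
    """
    hours = docs_to_review / docs_per_hour
    return hours, hours * rate_per_hour

# Example: the 17,543-document review set from the earlier walkthrough.
hours, cost = review_budget(17_543)
```

Because the inputs come straight from the metadata inventory and word index, the projection can be re-run instantly as custodians, date ranges, or search terms are added or dropped during negotiation.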

It shouldn’t be expensive to prepare these items to develop an initial assessment of the case ahead of the Rule 26(f) conference.  In fact, the company that I work for, CloudNine Discovery, provides these services for free.  But, regardless of who you use, it’s important to assess your data before the meet and confer to enable your legal team to understand the potential costs and risks associated with the case and negotiate the best possible approach for your client.

So, what do you think?  What analysis and data assessment do you perform prior to the meet and confer?  Please share any comments you might have or if you’d like to know more about a particular topic.

P.S.: No ducks were harmed in the making of this blog post.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by CloudNine Discovery. eDiscoveryDaily is made available by CloudNine Discovery solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscoveryDaily should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

eDiscovery History: Zubulake’s e-Discovery

In the 22 months since this blog began, we have published 133 posts related to eDiscovery case law.  When discussing the various case opinions that involve decisions regarding eDiscovery, it’s easy to forget that there are real people impacted by these cases and that the story of each case goes beyond whether the parties preserved, collected, reviewed and produced electronically stored information (ESI) correctly.  A new book, by the plaintiff in the most famous eDiscovery case ever, provides the “backstory” behind the precedent-setting opinions, detailing her experiences through the events leading up to the case, as well as over three years of litigation.

Laura A. Zubulake, the plaintiff in the Zubulake v. UBS Warburg case, has written a new book: Zubulake's e-Discovery: The Untold Story of my Quest for Justice.  It is the story of the Zubulake case – which resulted in one of the largest jury awards in the US for a single plaintiff in an employment discrimination case – as told by the author, in her own words.  As Zubulake notes in the Preface, the book “is written from the plaintiff’s perspective – my perspective. I am a businessperson, not an attorney. The version of events and opinions expressed are portrayed by me from facts and circumstances as I perceived them.”  It’s a “classic David versus Goliath story” describing her multi-year struggle against her former employer – a multi-national financial giant.

Zubulake begins the story by describing the Wall Street environment in which she worked for over twenty years and the growing importance of email as a means of communication in that workplace.  The story continues through a timeline of the allegations and the supporting evidence leading up to her filing of a discrimination claim with the Equal Employment Opportunity Commission (EEOC) and her subsequent dismissal from the firm.  This Allegations & Evidence chapter is particularly enlightening for those who may be familiar with the landmark opinions but not with the underlying evidence, and with how the evidence to prove her case came together through the various productions (including the court-ordered productions from backup tapes).  The story then proceeds through the filing of the case and the beginning of the discovery process, through the events leading up to each of the landmark opinions (with a separate chapter devoted to each of Zubulake I, III, IV and V), and finally through trial, the jury verdict and the resolution of the case.

Throughout the book, Zubulake relays her experiences, successes, mistakes, thought processes and feelings during the events and the difficulties and isolation of being an individual plaintiff in a three-year litigation process.  She also weighs in on the significance of each of the opinions, including one ruling by Judge Shira Scheindlin that may not have had as much impact on the outcome as you might think.  For those familiar with the opinions, the book provides the “backstory” that puts the opinions into perspective; for those not familiar with them, it’s a comprehensive account of an individual who fought for her rights against a large corporation and won.  Everybody loves a good “David versus Goliath story”, right?

The book is available at Amazon and also at CreateSpace.  Look for my interview with Laura regarding the book in this blog next week.

So, what do you think?  Are you familiar with the Zubulake opinions?  Have you read the book?  Please share any comments you might have or if you’d like to know more about a particular topic.

eDiscovery Trends: Need to Catch Up on Trends Over the Last Six Weeks? Take a Time Capsule.

I try to set aside some time over the weekend to catch up on my reading and keep abreast of developments in the industry, and although that’s sometimes easier said than done, I stumbled across an interesting compilation of legal technology information from my friend Christy Burke and her team at Burke & Company.  On Friday, Burke & Company released The Legal Technology Observer (LTO) Time Capsule on Legal IT Professionals. LTO was a concentrated six-week collection of essays, articles, surveys and blog posts providing expert practical knowledge about legal technology, eDiscovery, and social media for legal professionals.

The content has been formatted into a PDF version and is available for free download here.  As noted in their press release, Burke & Company's bloggers, including Christy, Melissa DiMercurio, Ada Spahija and Taylor Gould, as well as many distinguished guest contributors, set out to examine the trends, topics and perspectives driving today's legal technology world over the six weeks from June 6 to July 12.  They did so with the help of many of the industry's most respected experts, and LTO acquired more than 21,000 readers in just six weeks.  Nice job!

The LTO Time Capsule covers a wide range of topics related to legal technology.  Several of those topics have an impact on eDiscovery, some featuring thought leaders previously interviewed on this blog (links to our previous interviews with them below), including:

  • The EDRM Speaks My Language: Written by – Ada Spahija, Communications Specialist at Burke and Company LLC; Featuring – Experts George Socha and Tom Gelbmann.
  • Learning to Speak EDRM: Written by – Ada Spahija, Communications Specialist at Burke and Company LLC; Featuring – Experts George Socha and Tom Gelbmann.
  • Predictive Coding: Dozens of Names, No Definition, Lots of Controversy: Written by – Sharon D. Nelson, Esq. and John W. Simek.
  • Social Media 101 for Law Firms – Don’t Get Left Behind: Written by – Ada Spahija, Communications Specialist at Burke and Company LLC; Featuring – Kerry Scott Boll of JustEngage.
  • Results of Social Media 101 Snap-Poll: Written by – Ada Spahija, Communications Specialist at Burke and Company LLC.
  • Getting up to Speed with eDiscovery: Written by – Taylor Gould, Communications Intern at Burke and Company LLC; Featuring – Browning Marean, Senior Counsel at DLA Piper, San Diego.
  • LTO Interviews Craig Ball to Examine the Power of Computer Forensics: Written by – Melissa DiMercurio, Account Executive at Burke and Company LLC; Featuring – Expert Craig Ball, Trial Lawyer and Certified Computer Forensic Examiner.
  • LTO Asks Bob Ambrogi How a Lawyer Can Become a Legal Technology Expert: Written by – Melissa DiMercurio, Account Executive at Burke and Company LLC; Featuring – Bob Ambrogi, Practicing Lawyer, Writer and Media Consultant.
  • LTO Interviews Jeff Brandt about the Mysterious Cloud Computing Craze: Written by – Taylor Gould, Communications Intern at Burke and Company LLC; Featuring – Jeff Brandt, Editor of PinHawk Law Technology Daily Digest.
  • Legal Technology Observer eDiscovery in America – A Legend in the Making: Written by – Christy Burke, President of Burke and Company LLC; Featuring – Barry Murphy, Analyst with the eDJ Group and Contributor to eDiscoveryJournal.com.
  • IT-Lex and the Sedona Conference® Provide Real Help to Learn eDiscovery and Technology Law: Written by – Christy Burke, President of Burke and Company LLC.

These are just some of the topics, particularly those that have an impact on eDiscovery.  To check out the entire list of articles, click here to download the report.

So, what do you think?  Do you need a quick resource to catch up on your reading?  Please share any comments you might have or if you’d like to know more about a particular topic.