Sunday, May 13, 2012

Beginner's guide to using indexes to improve search performance

As you know, one of the most important techniques when designing a database schema, or any system that supports searching, is the selection and creation of indexes to increase the performance of your queries. What you may not know is why and how indexes help.

An index is, in simple terms, an optimized data structure that allows us to search our original structure in a more efficient way.

As a common example, take any book you have. You have the main structure, which is the book's content, but you also normally have two extra structures: the Table of Contents and the Index at the end.

The Table of Contents is a structure that helps us find chapters and sections quickly by title, so we say that the book is indexed by chapter and section title.

The Index helps us find pages by particular words, so the book is also indexed by words.

Using either of these indexes allows us to find what we need faster; without them, we would need to scan the whole book until we found what we were looking for.

Indexes help us find data fast, but as you can see from the analogy, they also add an extra space (memory) cost and a writing cost, since an extra structure has to be kept up to date. Looking closer, we can also see that a new entry in the Index has to go into its correct sorted position; it cannot simply be appended at the end.
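To illustrate that last point with a small sketch (the class and method names here are just for illustration): inserting a word into a sorted index means first finding the right position, for example with a binary search, and then shifting the later entries.

```java
import java.util.*;

// Sketch: inserting into a sorted index requires finding the correct
// position first (here via binary search), not just appending at the end.
public class OrderedInsertSketch {

    public static void insertSorted(List<String> sortedWords, String word) {
        int pos = Collections.binarySearch(sortedWords, word);
        if (pos < 0) {
            // binarySearch returns (-(insertion point) - 1) when the word is absent
            pos = -pos - 1;
        }
        sortedWords.add(pos, word); // shifts the later entries
    }

    public static void main(String[] args) {
        List<String> index = new ArrayList<>(Arrays.asList("apple", "cherry", "date"));
        insertSorted(index, "banana");
        System.out.println(index); // [apple, banana, cherry, date]
    }
}
```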

The idea is the same for a database or for a search engine. We have our main data structure (the tables and columns of our model in the case of a DB, our text content in the case of a search engine), and when we create an index we are creating another structure that allows fast searching over the main one.

We'll now work through a little example in Java of how this would look for a full-text search that indexes a book by words. We start with a Book and a Page class:


import java.util.ArrayList;
import java.util.List;

public class Book {
    private List<Page> pages = new ArrayList<>();

    public List<Page> getPages() {
        return pages;
    }

    public void addPage(Page page) {
        this.pages.add(page);
    }  
}



public class Page {
    private final String content;
    private final int number;

    public Page(String content,int number){
        this.content = content;
        this.number = number;
    }

    public String getContent() {
        return content;
    }

    public int getNumber() {
        return number;
    }
   
}



So let's build a test example for a book with 10,000 pages (I know it is a big fake book) and then try to find a word in it.


First we define a simple BookFinder interface with a single lookup method:


public interface BookFinder {
    Page findPage(String word);
}


Our first finder implements it by scanning the pages linearly:



public class NonIndexBookFinder implements BookFinder {

    private final Book book;

    public NonIndexBookFinder(Book book) {
        this.book = book;
    }

    @Override
    public Page findPage(String word) {
        for (Page page : book.getPages()) {
            if (page.getContent().contains(word)) {
                return page;
            }
        }
        return null;
    }
}


Now we create an IndexBookFinder based on a HashMap.


import java.util.HashMap;
import java.util.Map;

public class IndexBookFinder implements BookFinder {

    private Map<String, Page> index = new HashMap<>();

    public IndexBookFinder(Book book, String[] wordsToIndex) {
        for (Page page : book.getPages()) {
            for (String word : wordsToIndex) {
                if (page.getContent().contains(word)) {
                    // Note: put overwrites, so only the last page containing
                    // the word ends up in this simplistic index.
                    index.put(word, page);
                }
            }
        }
    }

    @Override
    public Page findPage(String word) {
        return index.get(word);
    }

}


We can see that the cost has moved to the construction of the index; the search itself is now an O(1) operation.

Now we write our test code:


public class NoIndexBookTest {

    public static void main(String[] args) {
        final Book book = new Book();
        for (int i = 0; i < 10000; i++) {
            Page page = new Page(createContentBasedOnIndex(i), i);
            book.addPage(page);
        }

        final BookFinder finder = new NonIndexBookFinder(book);
        timedFind(new Runnable() {
            @Override
            public void run() {
                Page page = finder.findPage("tolook");
                System.out.println("PAGE: " + page.getNumber());
            }
        });

        final BookFinder finder2 = new IndexBookFinder(book, new String[]{"tolook"});
        timedFind(new Runnable() {
            @Override
            public void run() {
                Page page = finder2.findPage("tolook");
                System.out.println("PAGE: " + page.getNumber());
            }
        });
    }

    private static String createContentBasedOnIndex(int pageNo) {
        StringBuilder string = new StringBuilder();
        for (int i = 0; i < 5000; i++) {
            string.append(" palabra ");
            if (pageNo == 9900 && i == 4000) {
                string.append("tolook");
            }
        }
        return string.toString();
    }

    private static void timedFind(Runnable runnable) {
        long init = System.currentTimeMillis();
        runnable.run();
        long end = System.currentTimeMillis();
        System.out.println(end - init);
    }
}

And when we execute it, we get:

PAGE: 9900
268
PAGE: 9900
0



As we can see, the indexed version executes the search in under 1 millisecond, compared to the roughly 270 milliseconds the linear scan takes.

This is a very simple HashMap-based index implementation, but it shows the idea. Of course it is missing almost all real functionality; the point is simply to see that an alternative structure is used to search the main data faster.
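One example of that missing functionality: the HashMap above can only remember one page per word. A slightly more capable sketch (class and method names assumed here) is an inverted index that maps each word to the list of all pages containing it:

```java
import java.util.*;

// Sketch: an inverted index mapping each word to the list of page
// numbers that contain it, instead of a single page per word.
public class InvertedIndexSketch {

    public static Map<String, List<Integer>> buildIndex(List<String> pages) {
        Map<String, List<Integer>> index = new HashMap<>();
        for (int pageNo = 0; pageNo < pages.size(); pageNo++) {
            // Split each page into words and record every page a word appears on.
            for (String word : pages.get(pageNo).split("\\s+")) {
                index.computeIfAbsent(word, k -> new ArrayList<>()).add(pageNo);
            }
        }
        return index;
    }

    public static void main(String[] args) {
        List<String> pages = Arrays.asList("palabra tolook", "palabra", "tolook palabra");
        Map<String, List<Integer>> index = buildIndex(pages);
        System.out.println(index.get("tolook")); // [0, 2]
    }
}
```

Lookup is still a single map access, but now the query returns every matching page rather than an arbitrary one.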

Normally database implementations use a B-Tree to implement their indexes instead of a HashMap as used here; a definition can be found on Wikipedia at http://en.wikipedia.org/wiki/B-tree. The main point is that B-Trees allow searches, insertions, and deletions in logarithmic time, and they also support efficient range queries, direct lookups, sorted traversal, and so on.
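Java's standard library has no B-Tree, but TreeMap (a red-black tree) gives the same logarithmic guarantees and sorted order, so it can serve as a rough stand-in to demonstrate the range-query advantage a HashMap index cannot offer:

```java
import java.util.*;

// Sketch: a sorted, tree-based index supports efficient range queries.
// TreeMap is a red-black tree rather than a B-Tree, but both provide
// O(log n) search/insert/delete and keep their keys in sorted order.
public class SortedIndexSketch {

    public static void main(String[] args) {
        TreeMap<String, Integer> index = new TreeMap<>();
        index.put("apple", 3);
        index.put("banana", 7);
        index.put("cherry", 1);
        index.put("date", 9);

        // Exact lookup in O(log n).
        System.out.println(index.get("banana")); // 7

        // Range query: all words >= "b" and < "d" (from inclusive, to exclusive).
        SortedMap<String, Integer> range = index.subMap("b", "d");
        System.out.println(range.keySet()); // [banana, cherry]
    }
}
```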


I'll try to follow this post with a slightly more realistic index example based on a B-Tree.

Monday, May 7, 2012

ruby-recommender gem

I have just started developing a Ruby gem for a recommendation engine based on concepts from Apache Mahout.

The gem is hosted on GitHub. At the moment it is very basic. An example of its usage follows:



require 'recommendations'

def save_as_csv_file(file_path, values)
  File.open(file_path, 'w') do |file|
    values.each do |row|
      file.puts "#{row[0]},#{row[1]},#{row[2]}"
    end
  end
end

save_as_csv_file '/tmp/data_file', [['A','B',5],['A','C',3],['B','B',5],['B','C',3],['B','D',2]]
data_model = Recommendations::DataModel::FileDataModel.new('/tmp/data_file')
similarity = Recommendations::Similarity::EuclideanDistanceSimilarity.new(data_model)
neighborhood = Recommendations::Similarity::Neighborhood::NearestNUserNeighborhood.new(data_model, similarity, 5, 0.5)
rating_estimator = Recommendations::Recommender::Estimation::DefaultRatingEstimator.new(data_model, similarity)
recommender = Recommendations::Recommender::GenericUserBasedRecommender.new(data_model, similarity, neighborhood, rating_estimator)
recommendations = recommender.recommend('A', 5)
puts recommendations[0].item
puts recommendations[0].value


I will be trying to expand it to do more things. The first of these will be adding MongoDB support, as that is what I need for my own project.