Text parser (text into sentences) that works with UTF-8 and multiple languages?

Hi all,

I have to parse about 2000 files that are written in multiple
languages (some English, some Korean, some Arabic and some Japanese).
I have to split these UTF-8-encoded files into individual sentences. Has
anyone written a good parser that can parse all these non-Latin
character languages or can someone give me some advice on how to go
about writing a parser that can handle all these fairly different
languages?

Thank you,

Mike

7/30/2007 8:46:02 AM
comp.lang.ruby


2007/7/30, mike b. <michael.w.bell@gmail.com>:
> I have to parse about 2000 files that are written in multiple
> languages (some English, some Korean, some Arabic and some Japanese).
> I have to split these UTF-8-encoded files into individual sentences. Has
> anyone written a good parser that can parse all these non-Latin
> character languages or can someone give me some advice on how to go
> about writing a parser that can handle all these fairly different
> languages?

I would consider doing this in Java, as Java's regular expressions
support Unicode.  That might make the job much easier.  OTOH, if all
files use only dots, question marks etc. (i.e. ASCII characters) as
sentence delimiters, then Ruby's regular expressions may well do the
job.
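
To illustrate the ASCII-delimiter case, here is a minimal sketch in
Ruby (assuming every '.', '?' or '!' ends a sentence; abbreviations
like "e.g." or "Mr." would need extra handling):

```ruby
# Minimal sketch: split text on ASCII sentence delimiters only.
# Assumes every '.', '?' or '!' terminates a sentence -- no
# special treatment of abbreviations or decimal numbers.
def split_sentences(text)
  # Grab runs of non-delimiter characters followed by one or
  # more delimiters, then trim surrounding whitespace.
  text.scan(/[^.?!]+[.?!]+/m).map(&:strip)
end

split_sentences("Hello there. How are you? Fine!")
# => ["Hello there.", "How are you?", "Fine!"]
```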

Kind regards

robert

shortcutter (5830)
7/30/2007 9:26:00 AM
On Jul 30, 11:26 am, "Robert Klemme" <shortcut...@googlemail.com>
wrote:
> 2007/7/30, mike b. <michael.w.b...@gmail.com>:
>
> > I have to parse about 2000 files that are written in multiple
> > languages (some English, some Korean, some Arabic and some Japanese).
> > I have to split these UTF-8-encoded files into individual sentences. Has
> > anyone written a good parser that can parse all these non-Latin
> > character languages or can someone give me some advice on how to go
> > about writing a parser that can handle all these fairly different
> > languages?
>
> I would consider doing this in Java, as Java's regular expressions
> support Unicode.  That might make the job much easier.  OTOH, if all
> files use only dot, question mark etc. (i.e. ASCII chars) as sentence
> delimiters then Ruby's regular expressions might as well do the job.

Ruby supports UTF-8 regular expressions: for example, /\w+|\W/u can be
used to scan a string, splitting it into words and non-words. There
were some bugs with Unicode character classification in older versions
of Ruby, but I'm not aware of any in 1.8.6; OTOH I've never tried it
with non-Latin text, so I don't know whether it works correctly in
those cases too.
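
For what it's worth, the same scan idea extends to non-Latin scripts
if you list their sentence terminators explicitly. A sketch (the
terminator set below is an assumption, not an exhaustive list):

```ruby
# Sketch: split on sentence terminators from several scripts.
# The terminator set (ASCII . ? !, ideographic full stop 。,
# fullwidth ！ ？, Arabic question mark ؟) is an assumption --
# real corpora may use other punctuation.
def split_multilingual(text)
  # Runs of non-terminator characters followed by one or more
  # terminators, with surrounding whitespace trimmed.
  text.scan(/[^.?!。！？؟]+[.?!。！？؟]+/m).map(&:strip)
end

split_multilingual("これはペンです。That is all! هل فهمت؟")
# => ["これはペンです。", "That is all!", "هل فهمت؟"]
```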


7/30/2007 9:57:44 AM
On Jul 30, 2007, at 3:50 AM, mike b. wrote:

> I have to parse about 2000 files that are written in multiple
> languages (some English, some Korean, some Arabic and some Japanese).
> I have to split these UTF-8-encoded files into individual sentences.

As has been stated, Ruby's regular expression engine has a Unicode  
mode and that may be all you need here, depending on how you  
recognize sentence boundaries.

> Has anyone written a good parser that can parse all these non-Latin
> character languages or can someone give me some advice on how to go
> about writing a parser that can handle all these fairly different
> languages?

I've released an initial version of my Ghost Wheel parser generator  
library.  It doesn't have documentation yet, but it was built using  
TDD and you should be able to look over the tests to see how it  
works.  I'm also happy to answer questions.

My hope is that it works fine for non-Latin languages, but I'll  
confess that I haven't tested it that way yet.  I would try to fix  
any issues you uncovered though.

James Edward Gray II

james8284 (4405)
7/30/2007 12:52:14 PM