Hot questions for Using PDFBox in unicode

Question:

I've been using PDFBox version 2.0.0 in a Java project to convert PDFs to text.

Several of my PDFs are missing the ToUnicode mapping, so they come out as gibberish when I export them.

2016-09-14 10:44:55 WARN org.apache.pdfbox.pdmodel.font.PDSimpleFont(1):322 - No Unicode mapping for 694 (30) in font MPBAAA+F1

In the warning above, instead of the real character, a gibberish Unicode value (30) was reported.

I was able to overcome this by editing the additional.txt file in PDFBox, since from trial & error I understood that the character code (694 in this case) represents a certain Hebrew letter (צ).

Here's a short example of what I've edited inside the file:

-694;05E6 # hexadecimal value for the letter צ
-695;05E7
-696;05E8

Later I encountered almost the same warning on a different PDF, but instead of gibberish characters I got no characters at all. A more detailed explanation of this issue can be seen here: pdf reading via pdfbox in java

2016-09-14 11:07:10 WARN org.apache.pdfbox.pdmodel.font.PDType0Font(1):431 - No Unicode mapping for CID+694 (694) in font ABCDEE+Tahoma,Bold

As you can see, this warning came from a different class (PDType0Font) than the first one (PDSimpleFont), but the code (694) is the same in both, and they both refer to the same character.

Is there a different file that I should edit, other than additional.txt, to map code 694 (the Hebrew letter צ) to its correct Unicode value?

Thanks


Answer:

Here's some code to add a ToUnicode CMap stream to a font. Obviously I can't do it with your file, so I used one of my test files, which can be found here. I had to work on each entry separately and didn't do all of them. However, the result is good enough to extract the first word in the green print ("Bedingungen").

The scenario is somewhat tailored to you:

  • Identity-H entry
  • no ToUnicode entry
  • specific font name

    // f is the PDF file whose font shall get the ToUnicode CMap
    try (PDDocument doc = PDDocument.load(f))
    {
        for (int p = 0; p < doc.getNumberOfPages(); ++p)
        {
            PDPage page = doc.getPage(p);
            PDResources res = page.getResources();
            for (COSName fontName : res.getFontNames())
            {
                PDFont font = res.getFont(fontName);
                COSBase encoding = font.getCOSObject().getDictionaryObject(COSName.ENCODING);
                if (!COSName.IDENTITY_H.equals(encoding))
                {
                    continue;
                }
                // get real name
                String fname = font.getName();
                int plus = fname.indexOf('+');
                if (plus != -1)
                {
                    fname = fname.substring(plus + 1);
                }
                if (font.getCOSObject().containsKey(COSName.TO_UNICODE))
                {
                    continue;
                }
                System.out.println("File '" + f.getName() + "', page " + (p + 1) + ", " + fontName.getName() + ", " + font.getName());
                if (!fname.startsWith("Calibri-Bold"))
                {
                    continue;
                }
                COSStream toUnicodeStream = new COSStream();
                try (PrintWriter pw = new PrintWriter(toUnicodeStream.createOutputStream(COSName.FLATE_DECODE)))
                {
                    // "9.10 Extraction of Text Content" in the PDF 32000 specification
                    pw.println ("/CIDInit /ProcSet findresource begin\n" +
                            "12 dict begin\n" +
                            "begincmap\n" +
                            "/CIDSystemInfo\n" +
                            "<< /Registry (Adobe)\n" +
                            "/Ordering (UCS) /Supplement 0 >> def\n" +
                            "/CMapName /Adobe-Identity-UCS def\n" +
                            "/CMapType 2 def\n" +
                            "1 begincodespacerange\n" +
                            "<0000> <FFFF>\n" +
                            "endcodespacerange\n" +
                            "10 beginbfchar\n" + // number is count of entries
                            "<0001><0020>\n" + // space
                            "<0002><0041>\n" + // A
                            "<0003><0042>\n" + // B
                            "<0004><0044>\n" + // D
                            "<0013><0065>\n" + // e
                            "<0012><0064>\n" + // d
                            "<0017><0069>\n" + // i
                            "<001B><006E>\n" + // n
                            "<0015><0067>\n" + // g
                            "<0020><0075>\n" + // u
                            "endbfchar\n" +
                            "endcmap CMapName currentdict /CMap defineresource pop end end");
                }
                font.getCOSObject().setItem(COSName.TO_UNICODE, toUnicodeStream);
            }
        }
        doc.save("huhu.pdf");
    }
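
If you want to check the result in code as well, you can re-extract the text from the saved file with the standard PDFTextStripper (a minimal sketch, assuming the "huhu.pdf" saved above):

    // re-extract the text from the patched file to verify that the
    // glyphs mapped above now come out as text
    try (PDDocument patched = PDDocument.load(new File("huhu.pdf")))
    {
        PDFTextStripper stripper = new PDFTextStripper();
        System.out.println(stripper.getText(patched));
    }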
    

Btw, the unreleased 2.1 version of PDFDebugger has some improved features to show fonts. You can use it to verify that your ToUnicode CMap makes sense.

Question:

I am writing a Java function which takes a String as a parameter and produces a PDF as output with PDFBox.

Everything works fine as long as I use Latin characters. However, I don't know in advance what the input will be; it might be English as well as Chinese or Japanese characters.

In the case of non-Latin characters, here is the error I get:

Exception in thread "main" java.lang.IllegalArgumentException: U+3053 ('kohiragana') is not available in this font Helvetica encoding: WinAnsiEncoding
at org.apache.pdfbox.pdmodel.font.PDType1Font.encode(PDType1Font.java:426)
at org.apache.pdfbox.pdmodel.font.PDFont.encode(PDFont.java:324)
at org.apache.pdfbox.pdmodel.PDPageContentStream.showTextInternal(PDPageContentStream.java:509)
at org.apache.pdfbox.pdmodel.PDPageContentStream.showText(PDPageContentStream.java:471)
at com.mylib.pdf.PDFBuilder.generatePdfFromString(PDFBuilder.java:122)
at com.mylib.pdf.PDFBuilder.main(PDFBuilder.java:111)

If I understand correctly, I have to use a specific font for Japanese, another one for Chinese, and so on, because the one I am using (Helvetica) doesn't handle all the required Unicode characters.

I could also use a font which handles all these Unicode characters, such as Arial Unicode. However, that font is under a specific license, so I cannot use it, and I haven't found another one.

I found some projects that try to overcome this issue, like the Google Noto project. However, this project provides multiple font files, so I would have to choose the correct file to load at runtime, depending on the input.

So I am facing two options, one of which I don't know how to implement properly:

  1. Keep searching for a font that handles almost every Unicode character (where is this grail I am desperately seeking?!)

  2. Try to detect which language is used and select a font accordingly. Apart from the fact that I don't know (yet) how to do that, I don't find it to be a clean implementation, as the mapping between the input and the font file would be hardcoded, meaning I would have to hardcode all the possible mappings.

  3. Is there another solution?

  4. Am I completely off track?

Thanks in advance for your help and guidance!

Here is the code I use to generate the PDF:

public static void main(String[] args) throws IOException {
    String latinText = "This is latin text";
    String japaneseText = "これは日本語です";

    // This works fine
    generatePdfFromString(latinText);

    // This generates an error
    generatePdfFromString(japaneseText);
}

private static OutputStream generatePdfFromString(String content) throws IOException {
    PDPage page = new PDPage();

    try (PDDocument doc = new PDDocument()) {
        doc.addPage(page);

        // the content stream is closed by its own try-with-resources
        // before the document is saved
        try (PDPageContentStream contentStream = new PDPageContentStream(doc, page)) {
            contentStream.setFont(PDType1Font.HELVETICA, 12);

            // Or load a specific font from a file
            // contentStream.setFont(PDType0Font.load(doc, new File("/fontPath.ttf")), 12);

            contentStream.beginText();
            contentStream.showText(content);
            contentStream.endText();
        }

        OutputStream os = new ByteArrayOutputStream();
        doc.save(os);
        return os;
    }
}

Answer:

A better solution than waiting for a suitable font or guessing a text's language is to use a multitude of fonts and select the correct font on a glyph-by-glyph basis.

You already found the Google Noto Fonts, which are a good base collection of fonts for this task.

Unfortunately, though, Google publishes the Noto CJK fonts only as OpenType fonts (.otf), not as TrueType fonts (.ttf), a policy that isn't likely to change, cf. Noto fonts issue 249 and others. PDFBox, on the other hand, does not support OpenType fonts and isn't actively working on OpenType support either, cf. PDFBOX-2482.

Thus, one has to convert the OpenType font to TrueType somehow. I simply took the file shared by djmilch in his blog post FREE FONT NOTO SANS CJK IN TTF.

Font selection per character

So you essentially need a method which checks your text character by character and dissects it into chunks which can be drawn using the same font.

Unfortunately, I don't see a better way to ask a PDFBox PDFont whether it knows a glyph for a given character than to actually try to encode the character and consider an IllegalArgumentException a "no".

I, therefore, implemented that functionality using the following helper class TextWithFont and method fontify:

class TextWithFont {
    final String text;
    final PDFont font;

    TextWithFont(String text, PDFont font) {
        this.text = text;
        this.font = font;
    }

    public void show(PDPageContentStream canvas, float fontSize) throws IOException {
        canvas.setFont(font, fontSize);
        canvas.showText(text);
    }
}

(AddTextWithDynamicFonts inner class)

List<TextWithFont> fontify(List<PDFont> fonts, String text) throws IOException {
    List<TextWithFont> result = new ArrayList<>();
    if (text.length() > 0) {
        PDFont currentFont = null;
        int start = 0;
        for (int i = 0; i < text.length(); ) {
            int codePoint = text.codePointAt(i);
            int codeChars = Character.charCount(codePoint);
            String codePointString = text.substring(i, i + codeChars);
            boolean canEncode = false;
            for (PDFont font : fonts) {
                try {
                    font.encode(codePointString);
                    canEncode = true;
                    if (font != currentFont) {
                        if (currentFont != null) {
                            result.add(new TextWithFont(text.substring(start, i), currentFont));
                        }
                        currentFont = font;
                        start = i;
                    }
                    break;
                } catch (Exception e) {
                    // font cannot encode codepoint
                }
            }
            if (!canEncode) {
                throw new IOException("Cannot encode '" + codePointString + "'.");
            }
            i += codeChars;
        }
        result.add(new TextWithFont(text.substring(start, text.length()), currentFont));
    }
    return result;
}

(AddTextWithDynamicFonts method)

Example use

Using the method and the class above like this

String latinText = "This is latin text";
String japaneseText = "これは日本語です";
String mixedText = "Tこhれiはs日 本i語sで すlatin text";

generatePdfFromStringImproved(latinText).writeTo(new FileOutputStream("Cccompany-Latin-Improved.pdf"));
generatePdfFromStringImproved(japaneseText).writeTo(new FileOutputStream("Cccompany-Japanese-Improved.pdf"));
generatePdfFromStringImproved(mixedText).writeTo(new FileOutputStream("Cccompany-Mixed-Improved.pdf"));

(AddTextWithDynamicFonts test testAddLikeCccompanyImproved)

ByteArrayOutputStream generatePdfFromStringImproved(String content) throws IOException {
    try (   PDDocument doc = new PDDocument();
            InputStream notoSansRegularResource = AddTextWithDynamicFonts.class.getResourceAsStream("NotoSans-Regular.ttf");
            InputStream notoSansCjkRegularResource = AddTextWithDynamicFonts.class.getResourceAsStream("NotoSansCJKtc-Regular.ttf")   ) {
        PDType0Font notoSansRegular = PDType0Font.load(doc, notoSansRegularResource);
        PDType0Font notoSansCjkRegular = PDType0Font.load(doc, notoSansCjkRegularResource);
        List<PDFont> fonts = Arrays.asList(notoSansRegular, notoSansCjkRegular);

        List<TextWithFont> fontifiedContent = fontify(fonts, content);

        PDPage page = new PDPage();
        doc.addPage(page);
        try (   PDPageContentStream contentStream = new PDPageContentStream(doc, page)) {
            contentStream.beginText();
            for (TextWithFont textWithFont : fontifiedContent) {
                textWithFont.show(contentStream, 12);
            }
            contentStream.endText();
        }
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        doc.save(os);
        return os;
    }
}

(AddTextWithDynamicFonts helper method)

I get the expected output for latinText = "This is latin text", for japaneseText = "これは日本語です", and for mixedText = "Tこhれiはs日 本i語sで すlatin text".

Some asides
  • I retrieved the fonts as Java resources, but you can use any kind of InputStream for them.

  • The font selection mechanism above can quite easily be combined with the line breaking mechanism shown in this answer and the justification extension thereof in this answer.

Question:

While extracting text from some PDFs, PDFBox returns gibberish because of a missing or corrupt Unicode mapping. I can see the following warnings on the console, and I want to be able to detect this so that I can flag these PDFs as corrupt.

I'm looking for a solution that is better than parsing logs.

Thanks for your help!

Sample Console Logs:

WARNING: No Unicode mapping for CID+32 (32) in font F6
WARNING: Failed to find a character mapping for 32 in TimesNewRoman,Bold

The post below also talks about the same issue, but doesn't discuss ways to detect and handle it in code: Issue with reading some unicode characters out of a PDF using PDFBox


Answer:

A fourth possibility (next to the three given in Aaron Digulla's answer) is to override showGlyph() when extending the PDFTextStripper class:

protected void showGlyph(Matrix textRenderingMatrix, PDFont font, int code, String unicode, Vector displacement) throws IOException
{
    super.showGlyph(textRenderingMatrix, font, code, unicode, displacement);
    if (unicode == null || unicode.isEmpty())
    {
        // do stuff
    }
}
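
For example, to flag a document programmatically, you can count the glyphs that come through without a Unicode mapping during extraction. A minimal sketch (the GlyphCheckingStripper class and its counter are my own names, not part of PDFBox):

import java.io.IOException;

import org.apache.pdfbox.pdmodel.font.PDFont;
import org.apache.pdfbox.text.PDFTextStripper;
import org.apache.pdfbox.util.Matrix;
import org.apache.pdfbox.util.Vector;

public class GlyphCheckingStripper extends PDFTextStripper
{
    private int unmappedGlyphCount = 0;

    public GlyphCheckingStripper() throws IOException
    {
        super();
    }

    @Override
    protected void showGlyph(Matrix textRenderingMatrix, PDFont font, int code, String unicode, Vector displacement) throws IOException
    {
        super.showGlyph(textRenderingMatrix, font, code, unicode, displacement);
        if (unicode == null || unicode.isEmpty())
        {
            // no Unicode mapping for this glyph code
            unmappedGlyphCount++;
        }
    }

    public boolean hasUnmappedGlyphs()
    {
        return unmappedGlyphCount > 0;
    }
}

After calling getText() on an open PDDocument, hasUnmappedGlyphs() tells you whether the extracted text is incomplete:

GlyphCheckingStripper stripper = new GlyphCheckingStripper();
String text = stripper.getText(document);
if (stripper.hasUnmappedGlyphs())
{
    // flag this PDF as having a missing or corrupt Unicode mapping
}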

Question:

I'm using Apache's PDFBox library to write a PdfDocumentBuilder class. I'm using currentFont.hasGlyph(character) to check whether a character has a glyph before attempting to write it to the file. The problem is that when the character is a Unicode control character like '\u001f', hasGlyph() returns true, causing an exception to be thrown by encode() when writing (see the PdfDocumentBuilder code and stack trace below for reference).

I did some research, and it appears these Unicode control characters are not supported by the font I'm using (Courier Prime).

So why does hasGlyph() return true for Unicode control characters when they are not supported? Of course, I could strip the control characters from the line with a simple replaceAll before entering the writeTextWithSymbol() method, but if the hasGlyph() method isn't working as I expect it to, I have a bigger problem.

PdfDocumentBuilder:

private final PDDocument doc = new PDDocument();
private final PDType0Font baseFont;
private PDType0Font currentFont;

public PdfDocumentBuilder () throws IOException {
    baseFont = PDType0Font.load(doc, this.getClass().getResourceAsStream("/CourierPrime.ttf"));
    currentFont = baseFont;
}

private void writeTextWithSymbol (String text) throws IOException {
    StringBuilder nonSymbolBuffer = new StringBuilder();
    for (char character : text.toCharArray()) {
        if (currentFont.hasGlyph(character)) {
            nonSymbolBuffer.append(character);
        } else {
            //handling writing line with symbols...
        }
    }
    if (nonSymbolBuffer.length() > 0) {
        content.showText(nonSymbolBuffer.toString());
    }
}

Stack trace:

java.lang.IllegalArgumentException: No glyph for U+001F in font CourierPrime
at org.apache.pdfbox.pdmodel.font.PDCIDFontType2.encode(PDCIDFontType2.java:400)
at org.apache.pdfbox.pdmodel.font.PDType0Font.encode(PDType0Font.java:351)
at org.apache.pdfbox.pdmodel.font.PDFont.encode(PDFont.java:316)
at org.apache.pdfbox.pdmodel.PDPageContentStream.showText(PDPageContentStream.java:414)
at org.main.export.PdfDocumentBuilder.writeTextWithSymbol(PdfDocumentBuilder.java:193)

Answer:

As explained in the comments above, hasGlyph() is not meant to accept Unicode characters as a parameter. So if you need to check whether a character can be encoded before writing it, you can do something like this:

private void writeTextWithSymbol (String text) throws IOException {
    StringBuilder nonSymbolBuffer = new StringBuilder();
    for (char character : text.toCharArray()) {
        if (isCharacterEncodeable(character)) {
            nonSymbolBuffer.append(character);
        } else {
            //handle writing line with symbols...
        }
    }
    if (nonSymbolBuffer.length() > 0) {
        content.showText(nonSymbolBuffer.toString());
    }
}

private boolean isCharacterEncodeable (char character) throws IOException {
    try {
        currentFont.encode(Character.toString(character));
        return true;
    } catch (IllegalArgumentException iae) {
        LOGGER.trace("Character cannot be encoded", iae);
        return false;
    }
}
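
One caveat to this approach: iterating with toCharArray() tests UTF-16 code units one by one, so a character outside the Basic Multilingual Plane (a surrogate pair) would be tested half by half and always rejected. A code-point-based variant of the same check (a sketch of my own, reusing the currentFont and content fields from above):

private void writeTextWithSymbolByCodePoint (String text) throws IOException {
    StringBuilder nonSymbolBuffer = new StringBuilder();
    for (int i = 0; i < text.length(); ) {
        int codePoint = text.codePointAt(i);
        // a code point may span one or two chars (surrogate pair)
        String codePointString = new String(Character.toChars(codePoint));
        if (isEncodeable(codePointString)) {
            nonSymbolBuffer.append(codePointString);
        } else {
            //handle writing line with symbols...
        }
        i += Character.charCount(codePoint);
    }
    if (nonSymbolBuffer.length() > 0) {
        content.showText(nonSymbolBuffer.toString());
    }
}

private boolean isEncodeable (String codePointString) throws IOException {
    try {
        currentFont.encode(codePointString);
        return true;
    } catch (IllegalArgumentException iae) {
        return false;
    }
}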