Chilkat Examples


(Python) A Simple Web Crawler

This example demonstrates a simple web crawler built with the Chilkat Spider component. It crawls a few pages per site, then follows outbound links to domains it has not yet visited.


import chilkat

#  The Chilkat Spider component/library is free.
spider = chilkat.CkSpider()

seenDomains = chilkat.CkStringArray()
seedUrls = chilkat.CkStringArray()

seenDomains.put_Unique(True)
seedUrls.put_Unique(True)

#  You will need to change the start URL to something else...
seedUrls.Append("http://something.whateverYouWant.com/")

#  Set outbound URL exclude patterns
#  URLs matching any of these patterns will not be added to the
#  collection of outbound links.
spider.AddAvoidOutboundLinkPattern("*?id=*")
spider.AddAvoidOutboundLinkPattern("*.mypages.*")
spider.AddAvoidOutboundLinkPattern("*.personal.*")
spider.AddAvoidOutboundLinkPattern("*.comcast.*")
spider.AddAvoidOutboundLinkPattern("*.aol.*")
spider.AddAvoidOutboundLinkPattern("*~*")

#  Use a cache so we don't have to re-fetch URLs previously fetched.
spider.put_CacheDir("c:/spiderCache/")
spider.put_FetchFromCache(True)
spider.put_UpdateCache(True)

while seedUrls.get_Count() > 0:

    url = seedUrls.pop()
    spider.Initialize(url)

    #  Spider 5 URLs of this domain.
    #  but first, save the base domain in seenDomains
    domain = spider.getUrlDomain(url)
    seenDomains.Append(spider.getBaseDomain(domain))

    for i in range(0,5):
        success = spider.CrawlNext()
        if not success:
            break

        #  Display the URL we just crawled.
        print(spider.lastUrl())

        #  If the last URL was retrieved from cache,
        #  we won't wait.  Otherwise we'll wait 1 second
        #  before fetching the next URL.
        if not spider.get_LastFromCache():
            spider.SleepMs(1000)

    #  Add the outbound links to seedUrls, except
    #  for the domains we've already seen.
    for i in range(0,spider.get_NumOutboundLinks()):

        url = spider.getOutboundLink(i)
        domain = spider.getUrlDomain(url)
        baseDomain = spider.getBaseDomain(domain)
        if (not seenDomains.Contains(baseDomain)):
            seedUrls.Append(url)

        #  Don't let our list of seedUrls grow too large.
        if (seedUrls.get_Count() > 1000):
            break
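
The AddAvoidOutboundLinkPattern calls above take wildcard patterns in which * matches any run of characters and everything else (including the literal ? in "*?id=*") matches itself. As a rough stdlib illustration of that style of matching (a sketch under that assumption; Chilkat's exact matching rules may differ):

```python
import re

def wildcard_match(pattern, text):
    """Match a pattern where '*' stands for any run of characters.
    All other characters (including '?') are taken literally."""
    # Escape regex metacharacters, then turn the escaped '*' back into '.*'
    regex = re.escape(pattern).replace(r"\*", ".*")
    return re.fullmatch(regex, text) is not None

avoid_patterns = ["*?id=*", "*.mypages.*", "*~*"]

def is_avoided(url):
    """True if the URL matches any avoid pattern."""
    return any(wildcard_match(p, url) for p in avoid_patterns)
```
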

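The cache settings in the example (CacheDir, FetchFromCache, UpdateCache) amount to a read-through, write-through page cache keyed by URL. A minimal in-memory version of that idea (the fetch callable here is a stand-in, not Chilkat's API):

```python
class CachingFetcher:
    """Read-through cache: serve a URL from cache when possible,
    and store every fresh fetch back into the cache."""

    def __init__(self, fetch):
        self.fetch = fetch          # callable: url -> page body
        self.cache = {}
        self.last_from_cache = False

    def get(self, url):
        if url in self.cache:
            self.last_from_cache = True    # mirrors get_LastFromCache()
            return self.cache[url]
        self.last_from_cache = False
        body = self.fetch(url)
        self.cache[url] = body             # mirrors UpdateCache
        return body
```

A crawler can then skip its politeness delay whenever last_from_cache is true, exactly as the example does.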

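getUrlDomain and getBaseDomain reduce a URL to its hostname and something like its registrable domain. A naive stdlib approximation (helper names are made up; correct registrable-domain extraction requires the Public Suffix List, and Chilkat's logic may differ):

```python
from urllib.parse import urlparse

def url_domain(url):
    """Hostname of a URL, e.g. 'www.example.com'."""
    return urlparse(url).hostname or ""

def base_domain(domain):
    """Naive registrable domain: the last two labels.
    Wrong for suffixes like '.co.uk'; a real crawler should
    consult the Public Suffix List."""
    labels = domain.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else domain
```
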
 

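The overall strategy, crawl a handful of pages per site and then hop to unseen domains via outbound links, can be sketched without Chilkat by walking an in-memory link graph (the graph, helper names, and exact queueing details below are illustrative, not the Spider component's internals):

```python
from collections import deque

# Hypothetical link graph standing in for live fetches:
# each URL maps to the outbound links found on that page.
LINKS = {
    "http://a.example.com/":  ["http://b.example.org/", "http://a.example.com/2"],
    "http://a.example.com/2": ["http://c.example.net/"],
    "http://b.example.org/":  ["http://a.example.com/"],
    "http://c.example.net/":  [],
}

def base_domain(url):
    # Naive registrable domain: last two labels of the hostname.
    host = url.split("/")[2]
    return ".".join(host.split(".")[-2:])

def crawl(seed, pages_per_domain=5, max_seeds=1000):
    seed_urls = deque([seed])
    seen_domains = set()
    crawled = []
    while seed_urls:
        url = seed_urls.popleft()
        domain = base_domain(url)
        if domain in seen_domains:
            continue
        seen_domains.add(domain)
        # Crawl up to pages_per_domain pages within this domain,
        # collecting cross-domain outbound links as we go.
        frontier = [url]
        outbound = []
        for _ in range(pages_per_domain):
            if not frontier:
                break
            page = frontier.pop()
            crawled.append(page)
            for link in LINKS.get(page, []):
                if base_domain(link) == domain:
                    frontier.append(link)
                else:
                    outbound.append(link)
        # Queue outbound links to unseen domains, capped like the
        # example's 1000-entry limit on seedUrls.
        for link in outbound:
            if base_domain(link) not in seen_domains and len(seed_urls) < max_seeds:
                seed_urls.append(link)
    return crawled
```

Starting from http://a.example.com/, this visits both pages of example.com, then one page each on example.org and example.net, never returning to a base domain it has already crawled.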
© 2000-2014 Chilkat Software, Inc. All Rights Reserved.