commit 3bb2fdfad608c09d8653d77c866eb043684cfdc8 Author: Nico Melone Date: Tue Jan 28 14:59:07 2020 -0600 Initial commit diff --git a/.gitattributes b/.gitattributes new file mode 100644 index 0000000..dfe0770 --- /dev/null +++ b/.gitattributes @@ -0,0 +1,2 @@ +# Auto detect text files and perform LF normalization +* text=auto diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000..e62ec04 --- /dev/null +++ b/LICENSE @@ -0,0 +1,674 @@ +GNU GENERAL PUBLIC LICENSE + Version 3, 29 June 2007 + + Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/> + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The GNU General Public License is a free, copyleft license for +software and other kinds of works. + + The licenses for most software and other practical works are designed +to take away your freedom to share and change the works. By contrast, +the GNU General Public License is intended to guarantee your freedom to +share and change all versions of a program--to make sure it remains free +software for all its users. We, the Free Software Foundation, use the +GNU General Public License for most of our software; it applies also to +any other work released this way by its authors. You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +them if you wish), that you receive source code or can get it if you +want it, that you can change the software or use pieces of it in new +free programs, and that you know you can do these things. + + To protect your rights, we need to prevent others from denying you +these rights or asking you to surrender the rights. 
Therefore, you have +certain responsibilities if you distribute copies of the software, or if +you modify it: responsibilities to respect the freedom of others. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must pass on to the recipients the same +freedoms that you received. You must make sure that they, too, receive +or can get the source code. And you must show them these terms so they +know their rights. + + Developers that use the GNU GPL protect your rights with two steps: +(1) assert copyright on the software, and (2) offer you this License +giving you legal permission to copy, distribute and/or modify it. + + For the developers' and authors' protection, the GPL clearly explains +that there is no warranty for this free software. For both users' and +authors' sake, the GPL requires that modified versions be marked as +changed, so that their problems will not be attributed erroneously to +authors of previous versions. + + Some devices are designed to deny users access to install or run +modified versions of the software inside them, although the manufacturer +can do so. This is fundamentally incompatible with the aim of +protecting users' freedom to change the software. The systematic +pattern of such abuse occurs in the area of products for individuals to +use, which is precisely where it is most unacceptable. Therefore, we +have designed this version of the GPL to prohibit the practice for those +products. If such problems arise substantially in other domains, we +stand ready to extend this provision to those domains in future versions +of the GPL, as needed to protect the freedom of users. + + Finally, every program is threatened constantly by software patents. +States should not allow patents to restrict development and use of +software on general-purpose computers, but in those that do, we wish to +avoid the special danger that patents applied to a free program could +make it effectively proprietary. 
To prevent this, the GPL assures that +patents cannot be used to render the program non-free. + + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. + + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. 
If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. + + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. + + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. 
For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. 
Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. + + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. + + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. 
This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. + + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. + + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. 
+ + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. + + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. 
+ + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. + + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. 
+ + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. 
If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. + + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. 
If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. + + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. 
+ + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. 
For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. 
+ + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. 
You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Use with the GNU Affero General Public License. + + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU Affero General Public License into a single +combined work, and to convey the resulting work. 
The terms of this +License will continue to apply to the part which is the covered work, +but the special requirements of the GNU Affero General Public License, +section 13, concerning interaction through a network will apply to the +combination as such. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. + + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. 
THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. + + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF +SUCH DAMAGES. + + 17. Interpretation of Sections 15 and 16. + + If the disclaimer of warranty and limitation of liability provided +above cannot be given local legal effect according to their terms, +reviewing courts shall apply local law that most closely approximates +an absolute waiver of all civil liability in connection with the +Program, unless a warranty or assumption of liability accompanies a +copy of the Program in return for a fee. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +state the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. 
+ + + Copyright (C) + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . + +Also add information on how to contact you by electronic and paper mail. + + If the program does terminal interaction, make it output a short +notice like this when it starts in an interactive mode: + + Copyright (C) + This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate +parts of the General Public License. Of course, your program's commands +might be different; for a GUI interface, you would use an "about box". + + You should also get your employer (if you work as a programmer) or school, +if any, to sign a "copyright disclaimer" for the program, if necessary. +For more information on this, and how to apply and follow the GNU GPL, see +. + + The GNU General Public License does not permit incorporating your program +into proprietary programs. If your program is a subroutine library, you +may consider it more useful to permit linking proprietary applications with +the library. If this is what you want to do, use the GNU Lesser General +Public License instead of this License. But first, please read +. 
diff --git a/README.md b/README.md new file mode 100644 index 0000000..706dc63 --- /dev/null +++ b/README.md @@ -0,0 +1,2 @@ +# AWS-Device + The files for using AWS IoT and collecting data from various data generators diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/MQTTLib.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/MQTTLib.py new file mode 100644 index 0000000..2a2527a --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/MQTTLib.py @@ -0,0 +1,1779 @@ +# +#/* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
+# */ + +from AWSIoTPythonSDK.core.util.providers import CertificateCredentialsProvider +from AWSIoTPythonSDK.core.util.providers import IAMCredentialsProvider +from AWSIoTPythonSDK.core.util.providers import EndpointProvider +from AWSIoTPythonSDK.core.jobs.thingJobManager import jobExecutionTopicType +from AWSIoTPythonSDK.core.jobs.thingJobManager import jobExecutionTopicReplyType +from AWSIoTPythonSDK.core.protocol.mqtt_core import MqttCore +import AWSIoTPythonSDK.core.shadow.shadowManager as shadowManager +import AWSIoTPythonSDK.core.shadow.deviceShadow as deviceShadow +import AWSIoTPythonSDK.core.jobs.thingJobManager as thingJobManager + +# Constants +# - Protocol types: +MQTTv3_1 = 3 +MQTTv3_1_1 = 4 + +DROP_OLDEST = 0 +DROP_NEWEST = 1 + +class AWSIoTMQTTClient: + + def __init__(self, clientID, protocolType=MQTTv3_1_1, useWebsocket=False, cleanSession=True): + """ + + The client class that connects to and accesses AWS IoT over MQTT v3.1/3.1.1. + + The following connection types are available: + + - TLSv1.2 Mutual Authentication + + X.509 certificate-based secured MQTT connection to AWS IoT + + - Websocket SigV4 + + IAM credential-based secured MQTT connection over Websocket to AWS IoT + + It provides basic synchronous MQTT operations in the classic MQTT publish-subscribe + model, along with configurations of on-top features: + + - Auto reconnect/resubscribe + + - Progressive reconnect backoff + + - Offline publish requests queueing with draining + + **Syntax** + + .. code:: python + + import AWSIoTPythonSDK.MQTTLib as AWSIoTPyMQTT + + # Create an AWS IoT MQTT Client using TLSv1.2 Mutual Authentication + myAWSIoTMQTTClient = AWSIoTPyMQTT.AWSIoTMQTTClient("testIoTPySDK") + # Create an AWS IoT MQTT Client using Websocket SigV4 + myAWSIoTMQTTClient = AWSIoTPyMQTT.AWSIoTMQTTClient("testIoTPySDK", useWebsocket=True) + + **Parameters** + + *clientID* - String that denotes the client identifier used to connect to AWS IoT. 
+ If an empty string is provided, the client id for this connection will be randomly generated + on the server side. + + *protocolType* - MQTT version in use for this connection. Could be :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1` or :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1_1` + + *useWebsocket* - Boolean that denotes enabling MQTT over Websocket SigV4 or not. + + **Returns** + + :code:`AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTClient` object + + """ + self._mqtt_core = MqttCore(clientID, cleanSession, protocolType, useWebsocket) + + # Configuration APIs + def configureLastWill(self, topic, payload, QoS, retain=False): + """ + **Description** + + Used to configure the last will topic, payload and QoS of the client. Should be called before connect. + + **Syntax** + + .. code:: python + + myAWSIoTMQTTClient.configureLastWill("last/Will/Topic", "lastWillPayload", 0) + + **Parameters** + + *topic* - Topic name that last will publishes to. + + *payload* - Payload to publish for last will. + + *QoS* - Quality of Service. Could be 0 or 1. + + **Returns** + + None + + """ + self._mqtt_core.configure_last_will(topic, payload, QoS, retain) + + def clearLastWill(self): + """ + **Description** + + Used to clear the last will configuration that was previously set through configureLastWill. + + **Syntax** + + .. code:: python + + myAWSIoTMQTTClient.clearLastWill() + + **Parameters** + + None + + **Returns** + + None + + """ + self._mqtt_core.clear_last_will() + + def configureEndpoint(self, hostName, portNumber): + """ + **Description** + + Used to configure the host name and port number the client tries to connect to. Should be called + before connect. + + **Syntax** + + .. code:: python + + myAWSIoTMQTTClient.configureEndpoint("random.iot.region.amazonaws.com", 8883) + + **Parameters** + + *hostName* - String that denotes the host name of the user-specific AWS IoT endpoint. + + *portNumber* - Integer that denotes the port number to connect to. 
Could be :code:`8883` for + TLSv1.2 Mutual Authentication or :code:`443` for Websocket SigV4 and TLSv1.2 Mutual Authentication + with ALPN extension. + + **Returns** + + None + + """ + endpoint_provider = EndpointProvider() + endpoint_provider.set_host(hostName) + endpoint_provider.set_port(portNumber) + self._mqtt_core.configure_endpoint(endpoint_provider) + if portNumber == 443 and not self._mqtt_core.use_wss(): + self._mqtt_core.configure_alpn_protocols() + + def configureIAMCredentials(self, AWSAccessKeyID, AWSSecretAccessKey, AWSSessionToken=""): + """ + **Description** + + Used to configure/update the custom IAM credentials for Websocket SigV4 connection to + AWS IoT. Should be called before connect. + + **Syntax** + + .. code:: python + + myAWSIoTMQTTClient.configureIAMCredentials(obtainedAccessKeyID, obtainedSecretAccessKey, obtainedSessionToken) + + .. note:: + + Hard-coding credentials into custom script is NOT recommended. Please use AWS Cognito identity service + or other credential provider. + + **Parameters** + + *AWSAccessKeyID* - AWS Access Key Id from user-specific IAM credentials. + + *AWSSecretAccessKey* - AWS Secret Access Key from user-specific IAM credentials. + + *AWSSessionToken* - AWS Session Token for temporary authentication from STS. + + **Returns** + + None + + """ + iam_credentials_provider = IAMCredentialsProvider() + iam_credentials_provider.set_access_key_id(AWSAccessKeyID) + iam_credentials_provider.set_secret_access_key(AWSSecretAccessKey) + iam_credentials_provider.set_session_token(AWSSessionToken) + self._mqtt_core.configure_iam_credentials(iam_credentials_provider) + + def configureCredentials(self, CAFilePath, KeyPath="", CertificatePath=""): # Should be good for MutualAuth certs config and Websocket rootCA config + """ + **Description** + + Used to configure the rootCA, private key and certificate files. Should be called before connect. + + **Syntax** + + .. 
code:: python + + myAWSIoTMQTTClient.configureCredentials("PATH/TO/ROOT_CA", "PATH/TO/PRIVATE_KEY", "PATH/TO/CERTIFICATE") + + **Parameters** + + *CAFilePath* - Path to read the root CA file. Required for all connection types. + + *KeyPath* - Path to read the private key. Required for X.509 certificate based connection. + + *CertificatePath* - Path to read the certificate. Required for X.509 certificate based connection. + + **Returns** + + None + + """ + cert_credentials_provider = CertificateCredentialsProvider() + cert_credentials_provider.set_ca_path(CAFilePath) + cert_credentials_provider.set_key_path(KeyPath) + cert_credentials_provider.set_cert_path(CertificatePath) + self._mqtt_core.configure_cert_credentials(cert_credentials_provider) + + def configureAutoReconnectBackoffTime(self, baseReconnectQuietTimeSecond, maxReconnectQuietTimeSecond, stableConnectionTimeSecond): + """ + **Description** + + Used to configure the auto-reconnect backoff timing. Should be called before connect. + + **Syntax** + + .. code:: python + + # Configure the auto-reconnect backoff to start with 1 second and use 128 seconds as a maximum back off time. + # Connection over 20 seconds is considered stable and will reset the back off time back to its base. + myAWSIoTMQTTClient.configureAutoReconnectBackoffTime(1, 128, 20) + + **Parameters** + + *baseReconnectQuietTimeSecond* - The initial back off time to start with, in seconds. + Should be less than the stableConnectionTime. + + *maxReconnectQuietTimeSecond* - The maximum back off time, in seconds. + + *stableConnectionTimeSecond* - The number of seconds for a connection to last to be considered as stable. + Back off time will be reset to base once the connection is stable. 
+ + **Returns** + + None + + """ + self._mqtt_core.configure_reconnect_back_off(baseReconnectQuietTimeSecond, maxReconnectQuietTimeSecond, stableConnectionTimeSecond) + + def configureOfflinePublishQueueing(self, queueSize, dropBehavior=DROP_NEWEST): + """ + **Description** + + Used to configure the queue size and drop behavior for the offline requests queueing. Should be + called before connect. Queueable offline requests include publish, subscribe and unsubscribe. + + **Syntax** + + .. code:: python + + import AWSIoTPythonSDK.MQTTLib as AWSIoTPyMQTT + + # Configure the offline queue for publish requests to be 20 in size and drop the oldest + # request when the queue is full. + myAWSIoTMQTTClient.configureOfflinePublishQueueing(20, AWSIoTPyMQTT.DROP_OLDEST) + + **Parameters** + + *queueSize* - Size of the queue for offline publish requests queueing. + If set to 0, the queue is disabled. If set to -1, the queue size is set to be infinite. + + *dropBehavior* - The type of drop behavior when the queue is full. + Could be :code:`AWSIoTPythonSDK.core.util.enums.DropBehaviorTypes.DROP_OLDEST` or + :code:`AWSIoTPythonSDK.core.util.enums.DropBehaviorTypes.DROP_NEWEST`. + + **Returns** + + None + + """ + self._mqtt_core.configure_offline_requests_queue(queueSize, dropBehavior) + + def configureDrainingFrequency(self, frequencyInHz): + """ + **Description** + + Used to configure the draining speed to clear up the queued requests when the connection is back. + Should be called before connect. + + **Syntax** + + .. code:: python + + # Configure the draining speed to be 2 requests/second + myAWSIoTMQTTClient.configureDrainingFrequency(2) + + .. note:: + + Make sure the draining speed is fast enough and faster than the publish rate. Slow draining + could result in an infinite draining process. + + **Parameters** + + *frequencyInHz* - The draining speed to clear the queued requests, in requests/second. 
+ + **Returns** + + None + + """ + self._mqtt_core.configure_draining_interval_sec(1/float(frequencyInHz)) + + def configureConnectDisconnectTimeout(self, timeoutSecond): + """ + **Description** + + Used to configure the time in seconds to wait for a CONNACK or a disconnect to complete. + Should be called before connect. + + **Syntax** + + .. code:: python + + # Configure connect/disconnect timeout to be 10 seconds + myAWSIoTMQTTClient.configureConnectDisconnectTimeout(10) + + **Parameters** + + *timeoutSecond* - Time in seconds to wait for a CONNACK or a disconnect to complete. + + **Returns** + + None + + """ + self._mqtt_core.configure_connect_disconnect_timeout_sec(timeoutSecond) + + def configureMQTTOperationTimeout(self, timeoutSecond): + """ + **Description** + + Used to configure the timeout in seconds for MQTT QoS 1 publish, subscribe and unsubscribe. + Should be called before connect. + + **Syntax** + + .. code:: python + + # Configure MQTT operation timeout to be 5 seconds + myAWSIoTMQTTClient.configureMQTTOperationTimeout(5) + + **Parameters** + + *timeoutSecond* - Time in seconds to wait for a PUBACK/SUBACK/UNSUBACK. + + **Returns** + + None + + """ + self._mqtt_core.configure_operation_timeout_sec(timeoutSecond) + + def configureUsernamePassword(self, username, password=None): + """ + **Description** + + Used to configure the username and password used in CONNECT packet. + + **Syntax** + + .. code:: python + + # Configure user name and password + myAWSIoTMQTTClient.configureUsernamePassword("myUsername", "myPassword") + + **Parameters** + + *username* - Username used in the username field of CONNECT packet. + + *password* - Password used in the password field of CONNECT packet. + + **Returns** + + None + + """ + self._mqtt_core.configure_username_password(username, password) + + def configureSocketFactory(self, socket_factory): + """ + **Description** + + Configure a socket factory to custom configure a different socket type for + mqtt connection. 
Creating a custom socket allows for configuration of a proxy + + **Syntax** + + .. code:: python + + # Configure socket factory + custom_args = {"arg1": "val1", "arg2": "val2"} + socket_factory = lambda: custom.create_connection((host, port), **custom_args) + myAWSIoTMQTTClient.configureSocketFactory(socket_factory) + + **Parameters** + + *socket_factory* - Anonymous function which creates a custom socket to spec. + + **Returns** + + None + + """ + self._mqtt_core.configure_socket_factory(socket_factory) + + def enableMetricsCollection(self): + """ + **Description** + + Used to enable SDK metrics collection. Username field in CONNECT packet will be used to append the SDK name + and SDK version in use and communicate to AWS IoT cloud. This metrics collection is enabled by default. + + **Syntax** + + .. code:: python + + myAWSIoTMQTTClient.enableMetricsCollection() + + **Parameters** + + None + + **Returns** + + None + + """ + self._mqtt_core.enable_metrics_collection() + + def disableMetricsCollection(self): + """ + **Description** + + Used to disable SDK metrics collection. + + **Syntax** + + .. code:: python + + myAWSIoTMQTTClient.disableMetricsCollection() + + **Parameters** + + None + + **Returns** + + None + + """ + self._mqtt_core.disable_metrics_collection() + + # MQTT functionality APIs + def connect(self, keepAliveIntervalSecond=600): + """ + **Description** + + Connect to AWS IoT, with user-specific keepalive interval configuration. + + **Syntax** + + .. code:: python + + # Connect to AWS IoT with default keepalive set to 600 seconds + myAWSIoTMQTTClient.connect() + # Connect to AWS IoT with keepalive interval set to 1200 seconds + myAWSIoTMQTTClient.connect(1200) + + **Parameters** + + *keepAliveIntervalSecond* - Time in seconds for interval of sending MQTT ping request. + A shorter keep-alive interval allows the client to detect disconnects more quickly. + Default set to 600 seconds. + + **Returns** + + True if the connect attempt succeeded. 
False if failed. + + """ + self._load_callbacks() + return self._mqtt_core.connect(keepAliveIntervalSecond) + + def connectAsync(self, keepAliveIntervalSecond=600, ackCallback=None): + """ + **Description** + + Connect asynchronously to AWS IoT, with user-specific keepalive interval configuration and CONNACK callback. + + **Syntax** + + .. code:: python + + # Connect to AWS IoT with default keepalive set to 600 seconds and a custom CONNACK callback
 + myAWSIoTMQTTClient.connectAsync(ackCallback=my_connack_callback) + # Connect to AWS IoT with keepalive interval set to 1200 seconds and a custom CONNACK callback + myAWSIoTMQTTClient.connectAsync(keepAliveIntervalSecond=1200, ackCallback=myConnackCallback) + + **Parameters** + + *keepAliveIntervalSecond* - Time in seconds for interval of sending MQTT ping request. + Default set to 600 seconds. + + *ackCallback* - Callback to be invoked when the client receives a CONNACK. Should be in form + :code:`customCallback(mid, data)`, where :code:`mid` is the packet id for the connect request + and :code:`data` is the connect result code. + + **Returns** + + Connect request packet id, for tracking purpose in the corresponding callback. + + """ + self._load_callbacks() + return self._mqtt_core.connect_async(keepAliveIntervalSecond, ackCallback) + + def _load_callbacks(self): + self._mqtt_core.on_online = self.onOnline + self._mqtt_core.on_offline = self.onOffline + self._mqtt_core.on_message = self.onMessage + + def disconnect(self): + """ + **Description** + + Disconnect from AWS IoT. + + **Syntax** + + .. code:: python + + myAWSIoTMQTTClient.disconnect() + + **Parameters** + + None + + **Returns** + + True if the disconnect attempt succeeded. False if failed. + + """ + return self._mqtt_core.disconnect() + + def disconnectAsync(self, ackCallback=None): + """ + **Description** + + Disconnect asynchronously from AWS IoT. + + **Syntax** + + .. 
code:: python + + myAWSIoTMQTTClient.disconnectAsync(ackCallback=myDisconnectCallback) + + **Parameters** + + *ackCallback* - Callback to be invoked when the client finishes sending disconnect and internal clean-up. + Should be in form :code:`customCallback(mid, data)`, where :code:`mid` is the packet id for the disconnect + request and :code:`data` is the disconnect result code. + + **Returns** + + Disconnect request packet id, for tracking purpose in the corresponding callback. + + """ + return self._mqtt_core.disconnect_async(ackCallback) + + def publish(self, topic, payload, QoS): + """ + **Description** + + Publish a new message to the desired topic with QoS. + + **Syntax** + + .. code:: python + + # Publish a QoS0 message "myPayload" to topic "myTopic" + myAWSIoTMQTTClient.publish("myTopic", "myPayload", 0) + # Publish a QoS1 message "myPayloadWithQos1" to topic "myTopic/sub" + myAWSIoTMQTTClient.publish("myTopic/sub", "myPayloadWithQos1", 1) + + **Parameters** + + *topic* - Topic name to publish to. + + *payload* - Payload to publish. + + *QoS* - Quality of Service. Could be 0 or 1. + + **Returns** + + True if the publish request has been sent to paho. False if the request did not reach paho. + + """ + return self._mqtt_core.publish(topic, payload, QoS, False) # Disable retain for publish by now + + def publishAsync(self, topic, payload, QoS, ackCallback=None): + """ + **Description** + + Publish a new message asynchronously to the desired topic with QoS and PUBACK callback. Note that the ack + callback configuration for a QoS0 publish request will be ignored as there is no PUBACK reception. + + **Syntax** + + .. 
code:: python + + # Publish a QoS0 message "myPayload" to topic "myTopic" + myAWSIoTMQTTClient.publishAsync("myTopic", "myPayload", 0) + # Publish a QoS1 message "myPayloadWithQos1" to topic "myTopic/sub", with custom PUBACK callback + myAWSIoTMQTTClient.publishAsync("myTopic/sub", "myPayloadWithQos1", 1, ackCallback=myPubackCallback) + + **Parameters** + + *topic* - Topic name to publish to. + + *payload* - Payload to publish. + + *QoS* - Quality of Service. Could be 0 or 1. + + *ackCallback* - Callback to be invoked when the client receives a PUBACK. Should be in form + :code:`customCallback(mid)`, where :code:`mid` is the packet id for the publish request. + + **Returns** + + Publish request packet id, for tracking purpose in the corresponding callback. + + """ + return self._mqtt_core.publish_async(topic, payload, QoS, False, ackCallback) + + def subscribe(self, topic, QoS, callback): + """ + **Description** + + Subscribe to the desired topic and register a callback. + + **Syntax** + + .. code:: python + + # Subscribe to "myTopic" with QoS0 and register a callback + myAWSIoTMQTTClient.subscribe("myTopic", 0, customCallback) + # Subscribe to "myTopic/#" with QoS1 and register a callback + myAWSIoTMQTTClient.subscribe("myTopic/#", 1, customCallback) + + **Parameters** + + *topic* - Topic name or filter to subscribe to. + + *QoS* - Quality of Service. Could be 0 or 1. + + *callback* - Function to be called when a new message for the subscribed topic + comes in. Should be in form :code:`customCallback(client, userdata, message)`, where + :code:`message` contains :code:`topic` and :code:`payload`. Note that :code:`client` and :code:`userdata` are + here just to be aligned with the underlying Paho callback function signature. These fields are pending + deprecation and should not be depended on. + + **Returns** + + True if the subscribe attempt succeeded. False if failed. 
+ + """ + return self._mqtt_core.subscribe(topic, QoS, callback) + + def subscribeAsync(self, topic, QoS, ackCallback=None, messageCallback=None): + """ + **Description** + + Subscribe to the desired topic and register a message callback with SUBACK callback. + + **Syntax** + + .. code:: python + + # Subscribe to "myTopic" with QoS0, custom SUBACK callback and a message callback + myAWSIoTMQTTClient.subscribe("myTopic", 0, ackCallback=mySubackCallback, messageCallback=customMessageCallback) + # Subscribe to "myTopic/#" with QoS1, custom SUBACK callback and a message callback + myAWSIoTMQTTClient.subscribe("myTopic/#", 1, ackCallback=mySubackCallback, messageCallback=customMessageCallback) + + **Parameters** + + *topic* - Topic name or filter to subscribe to. + + *QoS* - Quality of Service. Could be 0 or 1. + + *ackCallback* - Callback to be invoked when the client receives a SUBACK. Should be in form + :code:`customCallback(mid, data)`, where :code:`mid` is the packet id for the disconnect request and + :code:`data` is the granted QoS for this subscription. + + *messageCallback* - Function to be called when a new message for the subscribed topic + comes in. Should be in form :code:`customCallback(client, userdata, message)`, where + :code:`message` contains :code:`topic` and :code:`payload`. Note that :code:`client` and :code:`userdata` are + here just to be aligned with the underneath Paho callback function signature. These fields are pending to be + deprecated and should not be depended on. + + **Returns** + + Subscribe request packet id, for tracking purpose in the corresponding callback. + + """ + return self._mqtt_core.subscribe_async(topic, QoS, ackCallback, messageCallback) + + def unsubscribe(self, topic): + """ + **Description** + + Unsubscribe to the desired topic. + + **Syntax** + + .. code:: python + + myAWSIoTMQTTClient.unsubscribe("myTopic") + + **Parameters** + + *topic* - Topic name or filter to unsubscribe to. 
+ + **Returns** + + True if the unsubscribe attempt succeeded. False if failed. + + """ + return self._mqtt_core.unsubscribe(topic) + + def unsubscribeAsync(self, topic, ackCallback=None): + """ + **Description** + + Unsubscribe from the desired topic with UNSUBACK callback. + + **Syntax** + + .. code:: python + + myAWSIoTMQTTClient.unsubscribeAsync("myTopic", ackCallback=myUnsubackCallback) + + **Parameters** + + *topic* - Topic name or filter to unsubscribe from. + + *ackCallback* - Callback to be invoked when the client receives an UNSUBACK. Should be in form + :code:`customCallback(mid)`, where :code:`mid` is the packet id for the unsubscribe request. + + **Returns** + + Unsubscribe request packet id, for tracking purpose in the corresponding callback. + + """ + return self._mqtt_core.unsubscribe_async(topic, ackCallback) + + def onOnline(self): + """ + **Description** + + Callback that gets called when the client is online. The callback registration should happen before calling + connect/connectAsync. + + **Syntax** + + .. code:: python + + # Register an onOnline callback + myAWSIoTMQTTClient.onOnline = myOnOnlineCallback + + **Parameters** + + None + + **Returns** + + None + + """ + pass + + def onOffline(self): + """ + **Description** + + Callback that gets called when the client is offline. The callback registration should happen before calling + connect/connectAsync. + + **Syntax** + + .. code:: python + + # Register an onOffline callback + myAWSIoTMQTTClient.onOffline = myOnOfflineCallback + + **Parameters** + + None + + **Returns** + + None + + """ + pass + + def onMessage(self, message): + """ + **Description** + + Callback that gets called when the client receives a new message. The callback registration should happen before + calling connect/connectAsync. This callback, if present, will always be triggered regardless of whether there is + any message callback registered upon subscribe API call. 
It is for the purpose of aggregating the processing of + received messages in one function. + + **Syntax** + + .. code:: python + + # Register an onMessage callback + myAWSIoTMQTTClient.onMessage = myOnMessageCallback + + **Parameters** + + *message* - Received MQTT message. It contains the source topic as :code:`message.topic`, and the payload as + :code:`message.payload`. + + **Returns** + + None + + """ + pass + +class _AWSIoTMQTTDelegatingClient(object): + + def __init__(self, clientID, protocolType=MQTTv3_1_1, useWebsocket=False, cleanSession=True, awsIoTMQTTClient=None): + """ + + This class is used internally by the SDK and should not be instantiated directly. + + It delegates to a provided AWS IoT MQTT Client or creates a new one given the configuration + parameters, and exposes core operations so that subclasses can provide convenience methods. + + **Syntax** + + None + + **Parameters** + + *clientID* - String that denotes the client identifier used to connect to AWS IoT. + If an empty string is provided, the client id for this connection will be randomly generated + on the server side. + + *protocolType* - MQTT version in use for this connection. Could be :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1` or :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1_1` + + *useWebsocket* - Boolean that denotes enabling MQTT over Websocket SigV4 or not. + + **Returns** + + AWSIoTPythonSDK.MQTTLib._AWSIoTMQTTDelegatingClient object + + """ + # AWSIOTMQTTClient instance + self._AWSIoTMQTTClient = awsIoTMQTTClient if awsIoTMQTTClient is not None else AWSIoTMQTTClient(clientID, protocolType, useWebsocket, cleanSession) + + # Configuration APIs + def configureLastWill(self, topic, payload, QoS): + """ + **Description** + + Used to configure the last will topic, payload and QoS of the client. Should be called before connect. This is a public + facing API inherited by application level public clients. + + **Syntax** + + .. 
code:: python + + myShadowClient.configureLastWill("last/Will/Topic", "lastWillPayload", 0) + myJobsClient.configureLastWill("last/Will/Topic", "lastWillPayload", 0) + + **Parameters** + + *topic* - Topic name that last will publishes to. + + *payload* - Payload to publish for last will. + + *QoS* - Quality of Service. Could be 0 or 1. + + **Returns** + + None + + """ + # AWSIoTMQTTClient.configureLastWill(srcTopic, srcPayload, srcQos) + self._AWSIoTMQTTClient.configureLastWill(topic, payload, QoS) + + def clearLastWill(self): + """ + **Description** + + Used to clear the last will configuration that was previously set through configureLastWill. This is a public + facing API inherited by application level public clients. + + **Syntax** + + .. code:: python + + myShadowClient.clearLastWill() + myJobsClient.clearLastWill() + + **Parameters** + + None + + **Returns** + + None + + """ + # AWSIoTMQTTClient.clearLastWill() + self._AWSIoTMQTTClient.clearLastWill() + + def configureEndpoint(self, hostName, portNumber): + """ + **Description** + + Used to configure the host name and port number the underlying AWS IoT MQTT Client tries to connect to. Should be called + before connect. This is a public facing API inherited by application level public clients. + + **Syntax** + + .. code:: python + + myShadowClient.configureEndpoint("random.iot.region.amazonaws.com", 8883) + myJobsClient.configureEndpoint("random.iot.region.amazonaws.com", 8883) + + **Parameters** + + *hostName* - String that denotes the host name of the user-specific AWS IoT endpoint. + + *portNumber* - Integer that denotes the port number to connect to. Could be :code:`8883` for + TLSv1.2 Mutual Authentication or :code:`443` for Websocket SigV4 and TLSv1.2 Mutual Authentication + with ALPN extension. 
+ + **Returns** + + None + + """ + # AWSIoTMQTTClient.configureEndpoint + self._AWSIoTMQTTClient.configureEndpoint(hostName, portNumber) + + def configureIAMCredentials(self, AWSAccessKeyID, AWSSecretAccessKey, AWSSTSToken=""): + """ + **Description** + + Used to configure/update the custom IAM credentials for the underneath AWS IoT MQTT Client + for Websocket SigV4 connection to AWS IoT. Should be called before connect. This is a public + facing API inherited by application level public clients. + + **Syntax** + + .. code:: python + + myShadowClient.clearLastWill(obtainedAccessKeyID, obtainedSecretAccessKey, obtainedSessionToken) + myJobsClient.clearLastWill(obtainedAccessKeyID, obtainedSecretAccessKey, obtainedSessionToken) + + .. note:: + + Hard-coding credentials into custom script is NOT recommended. Please use AWS Cognito identity service + or other credential provider. + + **Parameters** + + *AWSAccessKeyID* - AWS Access Key Id from user-specific IAM credentials. + + *AWSSecretAccessKey* - AWS Secret Access Key from user-specific IAM credentials. + + *AWSSessionToken* - AWS Session Token for temporary authentication from STS. + + **Returns** + + None + + """ + # AWSIoTMQTTClient.configureIAMCredentials + self._AWSIoTMQTTClient.configureIAMCredentials(AWSAccessKeyID, AWSSecretAccessKey, AWSSTSToken) + + def configureCredentials(self, CAFilePath, KeyPath="", CertificatePath=""): # Should be good for MutualAuth and Websocket + """ + **Description** + + Used to configure the rootCA, private key and certificate files. Should be called before connect. This is a public + facing API inherited by application level public clients. + + **Syntax** + + .. code:: python + + myShadowClient.clearLastWill("PATH/TO/ROOT_CA", "PATH/TO/PRIVATE_KEY", "PATH/TO/CERTIFICATE") + myJobsClient.clearLastWill("PATH/TO/ROOT_CA", "PATH/TO/PRIVATE_KEY", "PATH/TO/CERTIFICATE") + + **Parameters** + + *CAFilePath* - Path to read the root CA file. Required for all connection types. 
+ + *KeyPath* - Path to read the private key. Required for X.509 certificate based connection. + + *CertificatePath* - Path to read the certificate. Required for X.509 certificate based connection. + + **Returns** + + None + + """ + # AWSIoTMQTTClient.configureCredentials + self._AWSIoTMQTTClient.configureCredentials(CAFilePath, KeyPath, CertificatePath) + + def configureAutoReconnectBackoffTime(self, baseReconnectQuietTimeSecond, maxReconnectQuietTimeSecond, stableConnectionTimeSecond): + """ + **Description** + + Used to configure the auto-reconnect backoff timing. Should be called before connect. This is a public + facing API inherited by application level public clients. + + **Syntax** + + .. code:: python + + # Configure the auto-reconnect backoff to start with 1 second and use 128 seconds as a maximum back off time. + # Connection over 20 seconds is considered stable and will reset the back off time back to its base. + myShadowClient.configureAutoReconnectBackoffTime(1, 128, 20) + myJobsClient.configureAutoReconnectBackoffTime(1, 128, 20) + + **Parameters** + + *baseReconnectQuietTimeSecond* - The initial back off time to start with, in seconds. + Should be less than the stableConnectionTime. + + *maxReconnectQuietTimeSecond* - The maximum back off time, in seconds. + + *stableConnectionTimeSecond* - The number of seconds for a connection to last to be considered as stable. + Back off time will be reset to base once the connection is stable. + + **Returns** + + None + + """ + # AWSIoTMQTTClient.configureBackoffTime + self._AWSIoTMQTTClient.configureAutoReconnectBackoffTime(baseReconnectQuietTimeSecond, maxReconnectQuietTimeSecond, stableConnectionTimeSecond) + + def configureConnectDisconnectTimeout(self, timeoutSecond): + """ + **Description** + + Used to configure the time in seconds to wait for a CONNACK or a disconnect to complete. + Should be called before connect. This is a public facing API inherited by application level public clients. + + **Syntax** + + .. 
code:: python + + # Configure connect/disconnect timeout to be 10 seconds + myShadowClient.configureConnectDisconnectTimeout(10) + myJobsClient.configureConnectDisconnectTimeout(10) + + **Parameters** + + *timeoutSecond* - Time in seconds to wait for a CONNACK or a disconnect to complete. + + **Returns** + + None + + """ + # AWSIoTMQTTClient.configureConnectDisconnectTimeout + self._AWSIoTMQTTClient.configureConnectDisconnectTimeout(timeoutSecond) + + def configureMQTTOperationTimeout(self, timeoutSecond): + """ + **Description** + + Used to configure the timeout in seconds for MQTT QoS 1 publish, subscribe and unsubscribe. + Should be called before connect. This is a public facing API inherited by application level public clients. + + **Syntax** + + .. code:: python + + # Configure MQTT operation timeout to be 5 seconds + myShadowClient.configureMQTTOperationTimeout(5) + myJobsClient.configureMQTTOperationTimeout(5) + + **Parameters** + + *timeoutSecond* - Time in seconds to wait for a PUBACK/SUBACK/UNSUBACK. + + **Returns** + + None + + """ + # AWSIoTMQTTClient.configureMQTTOperationTimeout + self._AWSIoTMQTTClient.configureMQTTOperationTimeout(timeoutSecond) + + def configureUsernamePassword(self, username, password=None): + """ + **Description** + + Used to configure the username and password used in CONNECT packet. This is a public facing API + inherited by application level public clients. + + **Syntax** + + .. code:: python + + # Configure user name and password + myShadowClient.configureUsernamePassword("myUsername", "myPassword") + myJobsClient.configureUsernamePassword("myUsername", "myPassword") + + **Parameters** + + *username* - Username used in the username field of CONNECT packet. + + *password* - Password used in the password field of CONNECT packet. 
+ + **Returns** + + None + + """ + self._AWSIoTMQTTClient.configureUsernamePassword(username, password) + + def configureSocketFactory(self, socket_factory): + """ + **Description** + + Configure a socket factory to customize the socket type used for the MQTT connection. + Creating a custom socket allows for the configuration of a proxy. + + **Syntax** + + .. code:: python + + # Configure socket factory + custom_args = {"arg1": "val1", "arg2": "val2"} + socket_factory = lambda: custom.create_connection((host, port), **custom_args) + myAWSIoTMQTTClient.configureSocketFactory(socket_factory) + + **Parameters** + + *socket_factory* - Anonymous function which creates a custom socket to spec. + + **Returns** + + None + + """ + self._AWSIoTMQTTClient.configureSocketFactory(socket_factory) + + def enableMetricsCollection(self): + """ + **Description** + + Used to enable SDK metrics collection. The username field in the CONNECT packet will be used to append the SDK name + and SDK version in use and communicate them to the AWS IoT cloud. This metrics collection is enabled by default. + This is a public facing API inherited by application level public clients. + + **Syntax** + + .. code:: python + + myShadowClient.enableMetricsCollection() + myJobsClient.enableMetricsCollection() + + **Parameters** + + None + + **Returns** + + None + + """ + self._AWSIoTMQTTClient.enableMetricsCollection() + + def disableMetricsCollection(self): + """ + **Description** + + Used to disable SDK metrics collection. This is a public facing API inherited by application level public clients. + + **Syntax** + + .. code:: python + + myShadowClient.disableMetricsCollection() + myJobsClient.disableMetricsCollection() + + **Parameters** + + None + + **Returns** + + None + + """ + self._AWSIoTMQTTClient.disableMetricsCollection() + + # Start the MQTT connection + def connect(self, keepAliveIntervalSecond=600): + """ + **Description** + + Connect to AWS IoT, with user-specific keepalive interval configuration.
This is a public facing API inherited + by application level public clients. + + **Syntax** + + .. code:: python + + # Connect to AWS IoT with default keepalive set to 600 seconds + myShadowClient.connect() + myJobsClient.connect() + # Connect to AWS IoT with keepalive interval set to 1200 seconds + myShadowClient.connect(1200) + myJobsClient.connect(1200) + + **Parameters** + + *keepAliveIntervalSecond* - Time in seconds for interval of sending MQTT ping request. + Default set to 600 seconds. + + **Returns** + + True if the connect attempt succeeded. False if failed. + + """ + self._load_callbacks() + return self._AWSIoTMQTTClient.connect(keepAliveIntervalSecond) + + def _load_callbacks(self): + self._AWSIoTMQTTClient.onOnline = self.onOnline + self._AWSIoTMQTTClient.onOffline = self.onOffline + + # End the MQTT connection + def disconnect(self): + """ + **Description** + + Disconnect from AWS IoT. This is a public facing API inherited by application level public clients. + + **Syntax** + + .. code:: python + + myShadowClient.disconnect() + myJobsClient.disconnect() + + **Parameters** + + None + + **Returns** + + True if the disconnect attempt succeeded. False if failed. + + """ + return self._AWSIoTMQTTClient.disconnect() + + # MQTT connection management API + def getMQTTConnection(self): + """ + **Description** + + Retrieve the AWS IoT MQTT Client used underneath, making it possible to perform + plain MQTT operations along with specialized operations using the same single connection. + This is a public facing API inherited by application level public clients. + + **Syntax** + + .. code:: python + + # Retrieve the AWS IoT MQTT Client used in the AWS IoT MQTT Delegating Client + thisAWSIoTMQTTClient = myShadowClient.getMQTTConnection() + thisAWSIoTMQTTClient = myJobsClient.getMQTTConnection() + # Perform plain MQTT operations using the same connection + thisAWSIoTMQTTClient.publish("Topic", "Payload", 1) + ...
+ + **Parameters** + + None + + **Returns** + + AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTClient object + + """ + # Return the internal AWSIoTMQTTClient instance + return self._AWSIoTMQTTClient + + def onOnline(self): + """ + **Description** + + Callback that gets called when the client is online. The callback registration should happen before calling + connect. This is a public facing API inherited by application level public clients. + + **Syntax** + + .. code:: python + + # Register an onOnline callback + myShadowClient.onOnline = myOnOnlineCallback + myJobsClient.onOnline = myOnOnlineCallback + + **Parameters** + + None + + **Returns** + + None + + """ + pass + + def onOffline(self): + """ + **Description** + + Callback that gets called when the client is offline. The callback registration should happen before calling + connect. This is a public facing API inherited by application level public clients. + + **Syntax** + + .. code:: python + + # Register an onOffline callback + myShadowClient.onOffline = myOnOfflineCallback + myJobsClient.onOffline = myOnOfflineCallback + + **Parameters** + + None + + **Returns** + + None + + """ + pass + + +class AWSIoTMQTTShadowClient(_AWSIoTMQTTDelegatingClient): + + def __init__(self, clientID, protocolType=MQTTv3_1_1, useWebsocket=False, cleanSession=True, awsIoTMQTTClient=None): + """ + + The client class that manages device shadow and accesses its functionality in AWS IoT over MQTT v3.1/3.1.1. + + It delegates to the AWS IoT MQTT Client and exposes device shadow related operations. + It shares the same connection types, synchronous MQTT operations and partial on-top features + with the AWS IoT MQTT Client: + + - Auto reconnect/resubscribe + + Same as AWS IoT MQTT Client. + + - Progressive reconnect backoff + + Same as AWS IoT MQTT Client. + + - Offline publish requests queueing with draining + + Disabled by default. Queueing is not allowed for time-sensitive shadow requests/messages. + + **Syntax** + + ..
code:: python + + import AWSIoTPythonSDK.MQTTLib as AWSIoTPyMQTT + + # Create an AWS IoT MQTT Shadow Client using TLSv1.2 Mutual Authentication + myAWSIoTMQTTShadowClient = AWSIoTPyMQTT.AWSIoTMQTTShadowClient("testIoTPySDK") + # Create an AWS IoT MQTT Shadow Client using Websocket SigV4 + myAWSIoTMQTTShadowClient = AWSIoTPyMQTT.AWSIoTMQTTShadowClient("testIoTPySDK", useWebsocket=True) + + **Parameters** + + *clientID* - String that denotes the client identifier used to connect to AWS IoT. + If an empty string is provided, the client id for this connection will be randomly generated + on the server side. + + *protocolType* - MQTT version in use for this connection. Could be :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1` or :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1_1` + + *useWebsocket* - Boolean that denotes enabling MQTT over Websocket SigV4 or not. + + **Returns** + + AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTShadowClient object + + """ + super(AWSIoTMQTTShadowClient, self).__init__(clientID, protocolType, useWebsocket, cleanSession, awsIoTMQTTClient) + # Leave passed in clients alone + if awsIoTMQTTClient is None: + # Configure it to disable offline Publish Queueing + self._AWSIoTMQTTClient.configureOfflinePublishQueueing(0) # Disable queueing, no queueing for time-sensitive shadow messages + self._AWSIoTMQTTClient.configureDrainingFrequency(10) + # Now retrieve the configured mqttCore and init a shadowManager instance + self._shadowManager = shadowManager.shadowManager(self._AWSIoTMQTTClient._mqtt_core) + + # Shadow management API + def createShadowHandlerWithName(self, shadowName, isPersistentSubscribe): + """ + **Description** + + Create a device shadow handler using the specified shadow name and isPersistentSubscribe. + + **Syntax** + + ..
code:: python + + # Create a device shadow handler for shadow named "Bot1", using persistent subscription + Bot1Shadow = myAWSIoTMQTTShadowClient.createShadowHandlerWithName("Bot1", True) + # Create a device shadow handler for shadow named "Bot2", using non-persistent subscription + Bot2Shadow = myAWSIoTMQTTShadowClient.createShadowHandlerWithName("Bot2", False) + + **Parameters** + + *shadowName* - Name of the device shadow. + + *isPersistentSubscribe* - Whether to unsubscribe from shadow response (accepted/rejected) topics + when there is a response. The client will subscribe the first time a shadow request is made and will + not unsubscribe if isPersistentSubscribe is set. + + **Returns** + + AWSIoTPythonSDK.core.shadow.deviceShadow.deviceShadow object, which exposes the device shadow interface. + + """ + # Create and return a deviceShadow instance + return deviceShadow.deviceShadow(shadowName, isPersistentSubscribe, self._shadowManager) + # Shadow APIs are accessible in the deviceShadow instance: + ### + # deviceShadow.shadowGet + # deviceShadow.shadowUpdate + # deviceShadow.shadowDelete + # deviceShadow.shadowRegisterDelta + # deviceShadow.shadowUnregisterDelta + +class AWSIoTMQTTThingJobsClient(_AWSIoTMQTTDelegatingClient): + + def __init__(self, clientID, thingName, QoS=0, protocolType=MQTTv3_1_1, useWebsocket=False, cleanSession=True, awsIoTMQTTClient=None): + """ + + The client class that specializes in handling jobs messages and accesses its functionality in AWS IoT over MQTT v3.1/3.1.1. + + It delegates to the AWS IoT MQTT Client and exposes jobs related operations. + It shares the same connection types, synchronous MQTT operations and partial on-top features + with the AWS IoT MQTT Client: + + - Auto reconnect/resubscribe + + Same as AWS IoT MQTT Client. + + - Progressive reconnect backoff + + Same as AWS IoT MQTT Client. + + - Offline publish requests queueing with draining + + Same as AWS IoT MQTT Client. + + **Syntax** + + ..
code:: python + + import AWSIoTPythonSDK.MQTTLib as AWSIoTPyMQTT + + # Create an AWS IoT MQTT Jobs Client using TLSv1.2 Mutual Authentication + myAWSIoTMQTTJobsClient = AWSIoTPyMQTT.AWSIoTMQTTThingJobsClient("testIoTPySDK", "myThingName") + # Create an AWS IoT MQTT Jobs Client using Websocket SigV4 + myAWSIoTMQTTJobsClient = AWSIoTPyMQTT.AWSIoTMQTTThingJobsClient("testIoTPySDK", "myThingName", useWebsocket=True) + + **Parameters** + + *clientID* - String that denotes the client identifier and client token for jobs requests. + If an empty string is provided, the client id for this connection will be randomly generated + on the server side. If an awsIoTMQTTClient is specified, this will not override the client ID + for the existing MQTT connection and will only impact the client token for jobs request payloads. + + *thingName* - String that represents the thingName used to send requests to proper topics and subscribe + to proper topics. + + *QoS* - QoS used for all requests sent through this client. + + *awsIoTMQTTClient* - An instance of AWSIoTMQTTClient to use if not None. If not None, clientID, protocolType, useWebsocket, + and cleanSession parameters are not used. Caller is expected to invoke connect() prior to calling the pub/sub methods on this client. + + *protocolType* - MQTT version in use for this connection. Could be :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1` or :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1_1` + + *useWebsocket* - Boolean that denotes enabling MQTT over Websocket SigV4 or not.
+ + **Returns** + + AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTThingJobsClient object + + """ + # AWSIoTMQTTClient instance + super(AWSIoTMQTTThingJobsClient, self).__init__(clientID, protocolType, useWebsocket, cleanSession, awsIoTMQTTClient) + self._thingJobManager = thingJobManager.thingJobManager(thingName, clientID) + self._QoS = QoS + + def createJobSubscription(self, callback, jobExecutionType=jobExecutionTopicType.JOB_WILDCARD_TOPIC, jobReplyType=jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId=None): + """ + **Description** + + Synchronously creates an MQTT subscription to a jobs related topic based on the provided arguments. + + **Syntax** + + .. code:: python + + # Subscribe to the notify-next topic to monitor changes to the job referred to by $next + myAWSIoTMQTTJobsClient.createJobSubscription(callback, jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC) + # Subscribe to the notify topic to monitor changes to jobs in the pending list + myAWSIoTMQTTJobsClient.createJobSubscription(callback, jobExecutionTopicType.JOB_NOTIFY_TOPIC) + # Subscribe to receive messages for job execution updates + myAWSIoTMQTTJobsClient.createJobSubscription(callback, jobExecutionTopicType.JOB_UPDATE_TOPIC, jobExecutionTopicReplyType.JOB_ACCEPTED_REPLY_TYPE) + # Subscribe to receive messages for describing a job execution + myAWSIoTMQTTJobsClient.createJobSubscription(callback, jobExecutionTopicType.JOB_DESCRIBE_TOPIC, jobExecutionTopicReplyType.JOB_ACCEPTED_REPLY_TYPE, jobId) + + **Parameters** + + *callback* - Function to be called when a new message for the subscribed job topic + comes in. Should be in form :code:`customCallback(client, userdata, message)`, where + :code:`message` contains :code:`topic` and :code:`payload`. Note that :code:`client` and :code:`userdata` are + here just to be aligned with the underneath Paho callback function signature. These fields are pending to be + deprecated and should not be depended on.
+ + *jobExecutionType* - Member of the jobExecutionTopicType class specifying the jobs topic to subscribe to. + Defaults to jobExecutionTopicType.JOB_WILDCARD_TOPIC. + + *jobReplyType* - Member of the jobExecutionTopicReplyType class specifying the (optional) reply sub-topic to subscribe to. + Defaults to jobExecutionTopicReplyType.JOB_REQUEST_TYPE, which indicates the subscription isn't intended for a jobs reply topic. + + *jobId* - JobId string if the topic type requires one. + Defaults to None. + + **Returns** + + True if the subscribe attempt succeeded. False if failed. + + """ + topic = self._thingJobManager.getJobTopic(jobExecutionType, jobReplyType, jobId) + return self._AWSIoTMQTTClient.subscribe(topic, self._QoS, callback) + + def createJobSubscriptionAsync(self, ackCallback, callback, jobExecutionType=jobExecutionTopicType.JOB_WILDCARD_TOPIC, jobReplyType=jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId=None): + """ + **Description** + + Asynchronously creates an MQTT subscription to a jobs related topic based on the provided arguments. + + **Syntax** + + .. code:: python + + # Subscribe to the notify-next topic to monitor changes to the job referred to by $next + myAWSIoTMQTTJobsClient.createJobSubscriptionAsync(ackCallback, callback, jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC) + # Subscribe to the notify topic to monitor changes to jobs in the pending list + myAWSIoTMQTTJobsClient.createJobSubscriptionAsync(ackCallback, callback, jobExecutionTopicType.JOB_NOTIFY_TOPIC) + # Subscribe to receive messages for job execution updates + myAWSIoTMQTTJobsClient.createJobSubscriptionAsync(ackCallback, callback, jobExecutionTopicType.JOB_UPDATE_TOPIC, jobExecutionTopicReplyType.JOB_ACCEPTED_REPLY_TYPE) + # Subscribe to receive messages for describing a job execution + myAWSIoTMQTTJobsClient.createJobSubscriptionAsync(ackCallback, callback, jobExecutionTopicType.JOB_DESCRIBE_TOPIC, jobExecutionTopicReplyType.JOB_ACCEPTED_REPLY_TYPE, jobId) + + **Parameters** + + *ackCallback* - Callback to be invoked when the client receives a SUBACK.
Should be in form + :code:`customCallback(mid, data)`, where :code:`mid` is the packet id for the subscribe request and + :code:`data` is the granted QoS for this subscription. + + *callback* - Function to be called when a new message for the subscribed job topic + comes in. Should be in form :code:`customCallback(client, userdata, message)`, where + :code:`message` contains :code:`topic` and :code:`payload`. Note that :code:`client` and :code:`userdata` are + here just to be aligned with the underneath Paho callback function signature. These fields are pending to be + deprecated and should not be depended on. + + *jobExecutionType* - Member of the jobExecutionTopicType class specifying the jobs topic to subscribe to. + Defaults to jobExecutionTopicType.JOB_WILDCARD_TOPIC. + + *jobReplyType* - Member of the jobExecutionTopicReplyType class specifying the (optional) reply sub-topic to subscribe to. + Defaults to jobExecutionTopicReplyType.JOB_REQUEST_TYPE, which indicates the subscription isn't intended for a jobs reply topic. + + *jobId* - JobId of the topic if the topic type requires one. + Defaults to None. + + **Returns** + + Subscribe request packet id, for tracking purpose in the corresponding callback. + + """ + topic = self._thingJobManager.getJobTopic(jobExecutionType, jobReplyType, jobId) + return self._AWSIoTMQTTClient.subscribeAsync(topic, self._QoS, ackCallback, callback) + + def sendJobsQuery(self, jobExecTopicType, jobId=None): + """ + **Description** + + Publishes an MQTT jobs related request for a potentially specific jobId (or wildcard). + + **Syntax** + + ..
code:: python + + # Send a request to describe the next job + myAWSIoTMQTTJobsClient.sendJobsQuery(jobExecutionTopicType.JOB_DESCRIBE_TOPIC, '$next') + # Send a request to get the list of pending jobs + myAWSIoTMQTTJobsClient.sendJobsQuery(jobExecutionTopicType.JOB_GET_PENDING_TOPIC) + + **Parameters** + + *jobExecTopicType* - Member of the jobExecutionTopicType class that correlates to the jobs topic to publish to. + + *jobId* - JobId string if the topic type requires one. + Defaults to None. + + **Returns** + + True if the publish request has been sent to paho. False if the request did not reach paho. + + """ + topic = self._thingJobManager.getJobTopic(jobExecTopicType, jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId) + payload = self._thingJobManager.serializeClientTokenPayload() + return self._AWSIoTMQTTClient.publish(topic, payload, self._QoS) + + def sendJobsStartNext(self, statusDetails=None, stepTimeoutInMinutes=None): + """ + **Description** + + Publishes an MQTT message to the StartNextJobExecution topic. This will attempt to get the next pending + job execution and change its status to IN_PROGRESS. + + **Syntax** + + .. code:: python + + # Start the next job (set status to IN_PROGRESS) and update with optional statusDetails + myAWSIoTMQTTJobsClient.sendJobsStartNext({'StartedBy': 'myClientId'}) + + **Parameters** + + *statusDetails* - Dictionary containing the key value pairs to use for the status details of the job execution. + + *stepTimeoutInMinutes* - Specifies the amount of time this device has to finish execution of this job. + + **Returns** + + True if the publish request has been sent to paho. False if the request did not reach paho.
+ + """ + topic = self._thingJobManager.getJobTopic(jobExecutionTopicType.JOB_START_NEXT_TOPIC, jobExecutionTopicReplyType.JOB_REQUEST_TYPE) + payload = self._thingJobManager.serializeStartNextPendingJobExecutionPayload(statusDetails, stepTimeoutInMinutes) + return self._AWSIoTMQTTClient.publish(topic, payload, self._QoS) + + def sendJobsUpdate(self, jobId, status, statusDetails=None, expectedVersion=0, executionNumber=0, includeJobExecutionState=False, includeJobDocument=False, stepTimeoutInMinutes=None): + """ + **Description** + + Publishes an MQTT message to a corresponding job execution specific topic to update its status according to the parameters. + Can be used to change a job from QUEUED to IN_PROGRESS to SUCCEEDED or FAILED. + + **Syntax** + + .. code:: python + + # Update job with id 'jobId123' to succeeded state, specifying new status details, with expectedVersion=1, executionNumber=2. + # For the response, include the job execution state but not the job document + myAWSIoTMQTTJobsClient.sendJobsUpdate('jobId123', jobExecutionStatus.JOB_EXECUTION_SUCCEEDED, statusDetailsMap, 1, 2, True, False) + + + # Update job with id 'jobId456' to failed state + myAWSIoTMQTTJobsClient.sendJobsUpdate('jobId456', jobExecutionStatus.JOB_EXECUTION_FAILED) + + **Parameters** + + *jobId* - JobID String of the execution to update the status of. + + *status* - Job execution status to change the job execution to. Member of jobExecutionStatus. + + *statusDetails* - New status details to set on the job execution. + + *expectedVersion* - The expected current version of the job execution. IoT jobs increments expectedVersion each time you update the job execution. + If the version of the job execution stored in Jobs does not match, the update is rejected with a VersionMismatch error, and an ErrorResponse + that contains the current job execution status data is returned.
(This makes it unnecessary to perform a separate DescribeJobExecution request + in order to obtain the job execution status data.) + + *executionNumber* - A number that identifies a particular job execution on a particular device. If not specified, the latest job execution is used. + + *includeJobExecutionState* - When included and set to True, the response contains the JobExecutionState field. The default is False. + + *includeJobDocument* - When included and set to True, the response contains the JobDocument. The default is False. + + *stepTimeoutInMinutes* - Specifies the amount of time this device has to finish execution of this job. + + **Returns** + + True if the publish request has been sent to paho. False if the request did not reach paho. + + """ + topic = self._thingJobManager.getJobTopic(jobExecutionTopicType.JOB_UPDATE_TOPIC, jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId) + payload = self._thingJobManager.serializeJobExecutionUpdatePayload(status, statusDetails, expectedVersion, executionNumber, includeJobExecutionState, includeJobDocument, stepTimeoutInMinutes) + return self._AWSIoTMQTTClient.publish(topic, payload, self._QoS) + + def sendJobsDescribe(self, jobId, executionNumber=0, includeJobDocument=True): + """ + **Description** + + Publishes a message to the describe topic for a particular job. + + **Syntax** + + .. code:: python + + # Describe job with id 'jobId1' of any executionNumber; the job document will be included in the response + myAWSIoTMQTTJobsClient.sendJobsDescribe('jobId1') + + # Describe job with id 'jobId2', with execution number of 2, and includeJobDocument in the response + myAWSIoTMQTTJobsClient.sendJobsDescribe('jobId2', 2, True) + + **Parameters** + + *jobId* - JobId to describe. This is allowed to be a wildcard such as '$next'. + + *executionNumber* - A number that identifies a particular job execution on a particular device. If not specified, the latest job execution is used.
+ + *includeJobDocument* - When included and set to True, the response contains the JobDocument. + + **Returns** + + True if the publish request has been sent to paho. False if the request did not reach paho. + + """ + topic = self._thingJobManager.getJobTopic(jobExecutionTopicType.JOB_DESCRIBE_TOPIC, jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId) + payload = self._thingJobManager.serializeDescribeJobExecutionPayload(executionNumber, includeJobDocument) + return self._AWSIoTMQTTClient.publish(topic, payload, self._QoS) diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/__init__.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/__init__.py new file mode 100644 index 0000000..eda1560 --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/__init__.py @@ -0,0 +1,3 @@ +__version__ = "1.4.8" + + diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/__init__.pyc b/aws-iot-device-sdk-python/AWSIoTPythonSDK/__init__.pyc new file mode 100644 index 0000000..67b596c Binary files /dev/null and b/aws-iot-device-sdk-python/AWSIoTPythonSDK/__init__.pyc differ diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/__init__.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/greengrass/__init__.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/greengrass/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/greengrass/discovery/__init__.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/greengrass/discovery/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/greengrass/discovery/models.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/greengrass/discovery/models.py new file mode 100644 index 0000000..ed8256d --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/greengrass/discovery/models.py @@ -0,0 +1,466 @@ 
+# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +import json + + +KEY_GROUP_LIST = "GGGroups" +KEY_GROUP_ID = "GGGroupId" +KEY_CORE_LIST = "Cores" +KEY_CORE_ARN = "thingArn" +KEY_CA_LIST = "CAs" +KEY_CONNECTIVITY_INFO_LIST = "Connectivity" +KEY_CONNECTIVITY_INFO_ID = "Id" +KEY_HOST_ADDRESS = "HostAddress" +KEY_PORT_NUMBER = "PortNumber" +KEY_METADATA = "Metadata" + + +class ConnectivityInfo(object): + """ + + Class that stores one set of the connectivity information. + This is the data model for easy access to the discovery information from the discovery request function call. No + need to call directly from user scripts. + + """ + + def __init__(self, id, host, port, metadata): + self._id = id + self._host = host + self._port = port + self._metadata = metadata + + @property + def id(self): + """ + + Connectivity Information Id. + + """ + return self._id + + @property + def host(self): + """ + + Host address. + + """ + return self._host + + @property + def port(self): + """ + + Port number. + + """ + return self._port + + @property + def metadata(self): + """ + + Metadata string. + + """ + return self._metadata + + +class CoreConnectivityInfo(object): + """ + + Class that stores the connectivity information for a Greengrass core. + This is the data model for easy access to the discovery information from the discovery request function call. No + need to call directly from user scripts.
+ + """ + + def __init__(self, coreThingArn, groupId): + self._core_thing_arn = coreThingArn + self._group_id = groupId + self._connectivity_info_dict = dict() + + @property + def coreThingArn(self): + """ + + Thing arn for this Greengrass core. + + """ + return self._core_thing_arn + + @property + def groupId(self): + """ + + Greengrass group id that this Greengrass core belongs to. + + """ + return self._group_id + + @property + def connectivityInfoList(self): + """ + + The list of connectivity information that this Greengrass core has. + + """ + return list(self._connectivity_info_dict.values()) + + def getConnectivityInfo(self, id): + """ + + **Description** + + Used for quickly accessing a certain set of connectivity information by id. + + **Syntax** + + .. code:: python + + myCoreConnectivityInfo.getConnectivityInfo("CoolId") + + **Parameters** + + *id* - The id for the desired connectivity information. + + **Return** + + :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.ConnectivityInfo` object. + + """ + return self._connectivity_info_dict.get(id) + + def appendConnectivityInfo(self, connectivityInfo): + """ + + **Description** + + Used for adding a new set of connectivity information to the list for this Greengrass core. This is used by the + SDK internally. No need to call directly from user scripts. + + **Syntax** + + .. code:: python + + myCoreConnectivityInfo.appendConnectivityInfo(newInfo) + + **Parameters** + + *connectivityInfo* - :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.ConnectivityInfo` object. + + **Returns** + + None + + """ + self._connectivity_info_dict[connectivityInfo.id] = connectivityInfo + + +class GroupConnectivityInfo(object): + """ + + Class that stores the connectivity information for a specific Greengrass group. + This is the data model for easy access to the discovery information from the discovery request function call. No + need to call directly from user scripts. 
+ + """ + def __init__(self, groupId): + self._group_id = groupId + self._core_connectivity_info_dict = dict() + self._ca_list = list() + + @property + def groupId(self): + """ + + Id for this Greengrass group. + + """ + return self._group_id + + @property + def coreConnectivityInfoList(self): + """ + + A list of Greengrass cores + (:code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` object) that belong to this + Greengrass group. + + """ + return list(self._core_connectivity_info_dict.values()) + + @property + def caList(self): + """ + + A list of CA content strings for this Greengrass group. + + """ + return self._ca_list + + def getCoreConnectivityInfo(self, coreThingArn): + """ + + **Description** + + Used to retrieve the corresponding :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` + object by core thing arn. + + **Syntax** + + .. code:: python + + myGroupConnectivityInfo.getCoreConnectivityInfo("YourOwnArnString") + + **Parameters** + + *coreThingArn* - Thing arn for the desired Greengrass core. + + **Returns** + + :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` object. + + """ + return self._core_connectivity_info_dict.get(coreThingArn) + + def appendCoreConnectivityInfo(self, coreConnectivityInfo): + """ + + **Description** + + Used to append new core connectivity information to this group connectivity information. This is used by the + SDK internally. No need to call directly from user scripts. + + **Syntax** + + .. code:: python + + myGroupConnectivityInfo.appendCoreConnectivityInfo(newCoreConnectivityInfo) + + **Parameters** + + *coreConnectivityInfo* - :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` object.
+ + **Returns** + + None + + """ + self._core_connectivity_info_dict[coreConnectivityInfo.coreThingArn] = coreConnectivityInfo + + def appendCa(self, ca): + """ + + **Description** + + Used to append a new CA content string to this group connectivity information. This is used by the SDK internally. + No need to call directly from user scripts. + + **Syntax** + + .. code:: python + + myGroupConnectivityInfo.appendCa("CaContentString") + + **Parameters** + + *ca* - Group CA content string. + + **Returns** + + None + + """ + self._ca_list.append(ca) + + +class DiscoveryInfo(object): + """ + + Class that stores the discovery information coming back from the discovery request. + This is the data model for easy access to the discovery information from the discovery request function call. No + need to call directly from user scripts. + + """ + def __init__(self, rawJson): + self._raw_json = rawJson + + @property + def rawJson(self): + """ + + JSON response string that contains the discovery information. This is reserved in case users want to do + some processing by themselves. + + """ + return self._raw_json + + def getAllCores(self): + """ + + **Description** + + Used to retrieve the list of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` + objects for this discovery information. The retrieved cores could be from different Greengrass groups. This is + designed for users who want to iterate through all available cores at the same time, regardless of which group + those cores are in. + + **Syntax** + + .. code:: python + + myDiscoveryInfo.getAllCores() + + **Parameters** + + None + + **Returns** + + List of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` objects.
+
+        """
+        groups_list = self.getAllGroups()
+        core_list = list()
+
+        for group in groups_list:
+            core_list.extend(group.coreConnectivityInfoList)
+
+        return core_list
+
+    def getAllCas(self):
+        """
+
+        **Description**
+
+        Used to retrieve the list of :code:`(groupId, caContent)` pairs for this discovery information. The retrieved
+        pairs could be from different Greengrass groups. This is designed for users who want to iterate through all
+        available cores/groups/CAs at the same time, regardless of which group those CAs belong to.
+
+        **Syntax**
+
+        .. code:: python
+
+          myDiscoveryInfo.getAllCas()
+
+        **Parameters**
+
+        None
+
+        **Returns**
+
+        List of :code:`(groupId, caContent)` string pairs, where :code:`caContent` is the CA content string and
+        :code:`groupId` is the ID of the group that this CA belongs to.
+
+        """
+        group_list = self.getAllGroups()
+        ca_list = list()
+
+        for group in group_list:
+            for ca in group.caList:
+                ca_list.append((group.groupId, ca))
+
+        return ca_list
+
+    def getAllGroups(self):
+        """
+
+        **Description**
+
+        Used to retrieve the list of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.GroupConnectivityInfo`
+        objects for this discovery information. This is designed for users who want to iterate through all available
+        groups that this Greengrass aware device (GGAD) belongs to.
+
+        **Syntax**
+
+        .. code:: python
+
+          myDiscoveryInfo.getAllGroups()
+
+        **Parameters**
+
+        None
+
+        **Returns**
+
+        List of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.GroupConnectivityInfo` objects.
+
+        """
+        groups_dict = self.toObjectAtGroupLevel()
+        return list(groups_dict.values())
+
+    def toObjectAtGroupLevel(self):
+        """
+
+        **Description**
+
+        Used to get a dictionary of Greengrass group discovery information, with the group ID string as key and the
+        corresponding :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.GroupConnectivityInfo` object as the
+        value. This is designed for users who know exactly which group, which core and which set of connectivity info
+        they want to use for the Greengrass aware device to connect.
+
+        **Syntax**
+
+        .. code:: python
+
+          # Get to the targeted connectivity information for a specific core in a specific group
+          groupLevelDiscoveryInfoObj = myDiscoveryInfo.toObjectAtGroupLevel()
+          groupConnectivityInfoObj = groupLevelDiscoveryInfoObj["IKnowMyGroupId"]
+          coreConnectivityInfoObj = groupConnectivityInfoObj.getCoreConnectivityInfo("IKnowMyCoreThingArn")
+          connectivityInfo = coreConnectivityInfoObj.getConnectivityInfo("IKnowMyConnectivityInfoSetId")
+          # Now retrieve the detailed information
+          caList = groupConnectivityInfoObj.caList
+          host = connectivityInfo.host
+          port = connectivityInfo.port
+          metadata = connectivityInfo.metadata
+          # Actual connecting logic follows...
+
+        """
+        groups_object = json.loads(self._raw_json)
+        groups_dict = dict()
+
+        for group_object in groups_object[KEY_GROUP_LIST]:
+            group_info = self._decode_group_info(group_object)
+            groups_dict[group_info.groupId] = group_info
+
+        return groups_dict
+
+    def _decode_group_info(self, group_object):
+        group_id = group_object[KEY_GROUP_ID]
+        group_info = GroupConnectivityInfo(group_id)
+
+        for core in group_object[KEY_CORE_LIST]:
+            core_info = self._decode_core_info(core, group_id)
+            group_info.appendCoreConnectivityInfo(core_info)
+
+        for ca in group_object[KEY_CA_LIST]:
+            group_info.appendCa(ca)
+
+        return group_info
+
+    def _decode_core_info(self, core_object, group_id):
+        core_info = CoreConnectivityInfo(core_object[KEY_CORE_ARN], group_id)
+
+        for connectivity_info_object in core_object[KEY_CONNECTIVITY_INFO_LIST]:
+            connectivity_info = ConnectivityInfo(connectivity_info_object[KEY_CONNECTIVITY_INFO_ID],
+                                                 connectivity_info_object[KEY_HOST_ADDRESS],
+                                                 connectivity_info_object[KEY_PORT_NUMBER],
+                                                 connectivity_info_object.get(KEY_METADATA, ''))
+            core_info.appendConnectivityInfo(connectivity_info)
+
+ return core_info diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/greengrass/discovery/providers.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/greengrass/discovery/providers.py new file mode 100644 index 0000000..646d79d --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/greengrass/discovery/providers.py @@ -0,0 +1,426 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + + +from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryInvalidRequestException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryUnauthorizedException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryDataNotFoundException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryThrottlingException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryTimeoutException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryFailure +from AWSIoTPythonSDK.core.greengrass.discovery.models import DiscoveryInfo +from AWSIoTPythonSDK.core.protocol.connection.alpn import SSLContextBuilder +import re +import sys +import ssl +import time +import errno +import logging +import socket +import platform +if platform.system() == 'Windows': + EAGAIN = errno.WSAEWOULDBLOCK +else: + EAGAIN = errno.EAGAIN + + +class DiscoveryInfoProvider(object): + + REQUEST_TYPE_PREFIX = "GET " + PAYLOAD_PREFIX = "/greengrass/discover/thing/" + PAYLOAD_SUFFIX = " HTTP/1.1\r\n" 
# Space in the front
+    HOST_PREFIX = "Host: "
+    HOST_SUFFIX = "\r\n\r\n"
+    HTTP_PROTOCOL = r"HTTP/1.1 "
+    CONTENT_LENGTH = r"content-length: "
+    CONTENT_LENGTH_PATTERN = CONTENT_LENGTH + r"([0-9]+)\r\n"
+    HTTP_RESPONSE_CODE_PATTERN = HTTP_PROTOCOL + r"([0-9]+) "
+
+    HTTP_SC_200 = "200"
+    HTTP_SC_400 = "400"
+    HTTP_SC_401 = "401"
+    HTTP_SC_404 = "404"
+    HTTP_SC_429 = "429"
+
+    LOW_LEVEL_RC_COMPLETE = 0
+    LOW_LEVEL_RC_TIMEOUT = -1
+
+    _logger = logging.getLogger(__name__)
+
+    def __init__(self, caPath="", certPath="", keyPath="", host="", port=8443, timeoutSec=120):
+        """
+
+        The class that provides functionality to perform a Greengrass discovery process with the cloud.
+
+        Users can perform the Greengrass discovery process for a specific Greengrass aware device to retrieve
+        connectivity/identity information of Greengrass cores within the same group.
+
+        **Syntax**
+
+        .. code:: python
+
+          from AWSIoTPythonSDK.core.greengrass.discovery.providers import DiscoveryInfoProvider
+
+          # Create a discovery information provider
+          myDiscoveryInfoProvider = DiscoveryInfoProvider()
+          # Create a discovery information provider with custom configuration
+          myDiscoveryInfoProvider = DiscoveryInfoProvider(caPath=myCAPath, certPath=myCertPath, keyPath=myKeyPath, host=myHost, timeoutSec=myTimeoutSec)
+
+        **Parameters**
+
+        *caPath* - Path to read the root CA file.
+
+        *certPath* - Path to read the certificate file.
+
+        *keyPath* - Path to read the private key file.
+
+        *host* - String that denotes the host name of the user-specific AWS IoT endpoint.
+
+        *port* - Integer that denotes the port number to connect to. For discovery purposes, this defaults to 8443.
+
+        *timeoutSec* - Timeout, in seconds, after which sending a discovery request or waiting for its response is
+        considered to have timed out.
+
+        **Returns**
+
+        AWSIoTPythonSDK.core.greengrass.discovery.providers.DiscoveryInfoProvider object
+
+        """
+        self._ca_path = caPath
+        self._cert_path = certPath
+        self._key_path = keyPath
+        self._host = host
+        self._port = port
+        self._timeout_sec = timeoutSec
+        self._expected_exception_map = {
+            self.HTTP_SC_400 : DiscoveryInvalidRequestException(),
+            self.HTTP_SC_401 : DiscoveryUnauthorizedException(),
+            self.HTTP_SC_404 : DiscoveryDataNotFoundException(),
+            self.HTTP_SC_429 : DiscoveryThrottlingException()
+        }
+
+    def configureEndpoint(self, host, port=8443):
+        """
+
+        **Description**
+
+        Used to configure the host address and port number that the discovery request is sent to. Should be called
+        before the discovery request happens.
+
+        **Syntax**
+
+        .. code:: python
+
+          # Using default port configuration, 8443
+          myDiscoveryInfoProvider.configureEndpoint(host="prefix.iot.us-east-1.amazonaws.com")
+          # Customize port configuration
+          myDiscoveryInfoProvider.configureEndpoint(host="prefix.iot.us-east-1.amazonaws.com", port=8888)
+
+        **Parameters**
+
+        *host* - String that denotes the host name of the user-specific AWS IoT endpoint.
+
+        *port* - Integer that denotes the port number to connect to. For discovery purposes, this defaults to 8443.
+
+        **Returns**
+
+        None
+
+        """
+        self._host = host
+        self._port = port
+
+    def configureCredentials(self, caPath, certPath, keyPath):
+        """
+
+        **Description**
+
+        Used to configure the credentials for the discovery request. Should be called before the discovery request
+        happens.
+
+        **Syntax**
+
+        .. code:: python
+
+          myDiscoveryInfoProvider.configureCredentials("my/ca/path", "my/cert/path", "my/key/path")
+
+        **Parameters**
+
+        *caPath* - Path to read the root CA file.
+
+        *certPath* - Path to read the certificate file.
+
+        *keyPath* - Path to read the private key file.
+
+        **Returns**
+
+        None
+
+        """
+        self._ca_path = caPath
+        self._cert_path = certPath
+        self._key_path = keyPath
+
+    def configureTimeout(self, timeoutSec):
+        """
+
+        **Description**
+
+        Used to configure the timeout, in seconds, for sending a discovery request and waiting for its response.
+        Should be called before the discovery request happens.
+
+        **Syntax**
+
+        .. code:: python
+
+          # Configure the timeout for discovery to be 10 seconds
+          myDiscoveryInfoProvider.configureTimeout(10)
+
+        **Parameters**
+
+        *timeoutSec* - Timeout, in seconds, after which sending a discovery request or waiting for its response is
+        considered to have timed out.
+
+        **Returns**
+
+        None
+
+        """
+        self._timeout_sec = timeoutSec
+
+    def discover(self, thingName):
+        """
+
+        **Description**
+
+        Perform the discovery request for the given Greengrass aware device thing name.
+
+        **Syntax**
+
+        .. code:: python
+
+          myDiscoveryInfoProvider.discover(thingName="myGGAD")
+
+        **Parameters**
+
+        *thingName* - Greengrass aware device thing name.
+
+        **Returns**
+
+        :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.DiscoveryInfo` object.
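+
+        **Example**
+
+        A minimal end-to-end sketch of the discovery flow, using only the configuration and accessor methods defined
+        in this module; the endpoint, credential paths and thing name below are placeholders to be replaced with
+        your own values.
+
+        .. code:: python
+
+          # Placeholders: substitute your own endpoint, file paths and thing name
+          myDiscoveryInfoProvider.configureEndpoint("prefix.iot.us-east-1.amazonaws.com")
+          myDiscoveryInfoProvider.configureCredentials("my/ca/path", "my/cert/path", "my/key/path")
+          myDiscoveryInfoProvider.configureTimeout(10)
+          discoveryInfo = myDiscoveryInfoProvider.discover(thingName="myGGAD")
+          # Iterate through every discovered core and group CA, regardless of group
+          for coreInfo in discoveryInfo.getAllCores():
+              print(coreInfo.coreThingArn)
+          for groupId, caContent in discoveryInfo.getAllCas():
+              print(groupId)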
+ + """ + self._logger.info("Starting discover request...") + self._logger.info("Endpoint: " + self._host + ":" + str(self._port)) + self._logger.info("Target thing: " + thingName) + sock = self._create_tcp_connection() + ssl_sock = self._create_ssl_connection(sock) + self._raise_on_timeout(self._send_discovery_request(ssl_sock, thingName)) + status_code, response_body = self._receive_discovery_response(ssl_sock) + + return self._raise_if_not_200(status_code, response_body) + + def _create_tcp_connection(self): + self._logger.debug("Creating tcp connection...") + try: + if (sys.version_info[0] == 2 and sys.version_info[1] < 7) or (sys.version_info[0] == 3 and sys.version_info[1] < 2): + sock = socket.create_connection((self._host, self._port)) + else: + sock = socket.create_connection((self._host, self._port), source_address=("", 0)) + return sock + except socket.error as err: + if err.errno != errno.EINPROGRESS and err.errno != errno.EWOULDBLOCK and err.errno != EAGAIN: + raise + self._logger.debug("Created tcp connection.") + + def _create_ssl_connection(self, sock): + self._logger.debug("Creating ssl connection...") + + ssl_protocol_version = ssl.PROTOCOL_SSLv23 + + if self._port == 443: + ssl_context = SSLContextBuilder()\ + .with_ca_certs(self._ca_path)\ + .with_cert_key_pair(self._cert_path, self._key_path)\ + .with_cert_reqs(ssl.CERT_REQUIRED)\ + .with_check_hostname(True)\ + .with_ciphers(None)\ + .with_alpn_protocols(['x-amzn-http-ca'])\ + .build() + ssl_sock = ssl_context.wrap_socket(sock, server_hostname=self._host, do_handshake_on_connect=False) + ssl_sock.do_handshake() + else: + ssl_sock = ssl.wrap_socket(sock, + certfile=self._cert_path, + keyfile=self._key_path, + ca_certs=self._ca_path, + cert_reqs=ssl.CERT_REQUIRED, + ssl_version=ssl_protocol_version) + + self._logger.debug("Matching host name...") + if sys.version_info[0] < 3 or (sys.version_info[0] == 3 and sys.version_info[1] < 2): + self._tls_match_hostname(ssl_sock) + else: + 
ssl.match_hostname(ssl_sock.getpeercert(), self._host) + + return ssl_sock + + def _tls_match_hostname(self, ssl_sock): + try: + cert = ssl_sock.getpeercert() + except AttributeError: + # the getpeercert can throw Attribute error: object has no attribute 'peer_certificate' + # Don't let that crash the whole client. See also: http://bugs.python.org/issue13721 + raise ssl.SSLError('Not connected') + + san = cert.get('subjectAltName') + if san: + have_san_dns = False + for (key, value) in san: + if key == 'DNS': + have_san_dns = True + if self._host_matches_cert(self._host.lower(), value.lower()) == True: + return + if key == 'IP Address': + have_san_dns = True + if value.lower() == self._host.lower(): + return + + if have_san_dns: + # Only check subject if subjectAltName dns not found. + raise ssl.SSLError('Certificate subject does not match remote hostname.') + subject = cert.get('subject') + if subject: + for ((key, value),) in subject: + if key == 'commonName': + if self._host_matches_cert(self._host.lower(), value.lower()) == True: + return + + raise ssl.SSLError('Certificate subject does not match remote hostname.') + + def _host_matches_cert(self, host, cert_host): + if cert_host[0:2] == "*.": + if cert_host.count("*") != 1: + return False + + host_match = host.split(".", 1)[1] + cert_match = cert_host.split(".", 1)[1] + if host_match == cert_match: + return True + else: + return False + else: + if host == cert_host: + return True + else: + return False + + def _send_discovery_request(self, ssl_sock, thing_name): + request = self.REQUEST_TYPE_PREFIX + \ + self.PAYLOAD_PREFIX + \ + thing_name + \ + self.PAYLOAD_SUFFIX + \ + self.HOST_PREFIX + \ + self._host + ":" + str(self._port) + \ + self.HOST_SUFFIX + self._logger.debug("Sending discover request: " + request) + + start_time = time.time() + desired_length_to_write = len(request) + actual_length_written = 0 + while True: + try: + length_written = ssl_sock.write(request.encode("utf-8")) + actual_length_written 
+= length_written + except socket.error as err: + if err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE: + pass + if actual_length_written == desired_length_to_write: + return self.LOW_LEVEL_RC_COMPLETE + if start_time + self._timeout_sec < time.time(): + return self.LOW_LEVEL_RC_TIMEOUT + + def _receive_discovery_response(self, ssl_sock): + self._logger.debug("Receiving discover response header...") + rc1, response_header = self._receive_until(ssl_sock, self._got_two_crlfs) + status_code, body_length = self._handle_discovery_response_header(rc1, response_header.decode("utf-8")) + + self._logger.debug("Receiving discover response body...") + rc2, response_body = self._receive_until(ssl_sock, self._got_enough_bytes, body_length) + response_body = self._handle_discovery_response_body(rc2, response_body.decode("utf-8")) + + return status_code, response_body + + def _receive_until(self, ssl_sock, criteria_function, extra_data=None): + start_time = time.time() + response = bytearray() + number_bytes_read = 0 + while True: # Python does not have do-while + try: + response.append(self._convert_to_int_py3(ssl_sock.read(1))) + number_bytes_read += 1 + except socket.error as err: + if err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE: + pass + + if criteria_function((number_bytes_read, response, extra_data)): + return self.LOW_LEVEL_RC_COMPLETE, response + if start_time + self._timeout_sec < time.time(): + return self.LOW_LEVEL_RC_TIMEOUT, response + + def _convert_to_int_py3(self, input_char): + try: + return ord(input_char) + except: + return input_char + + def _got_enough_bytes(self, data): + number_bytes_read, response, target_length = data + return number_bytes_read == int(target_length) + + def _got_two_crlfs(self, data): + number_bytes_read, response, extra_data_unused = data + number_of_crlf = 2 + has_enough_bytes = number_bytes_read > number_of_crlf * 2 - 1 + if has_enough_bytes: + end_of_received = 
response[number_bytes_read - number_of_crlf * 2 : number_bytes_read] + expected_end_of_response = b"\r\n" * number_of_crlf + return end_of_received == expected_end_of_response + else: + return False + + def _handle_discovery_response_header(self, rc, response): + self._raise_on_timeout(rc) + http_status_code_matcher = re.compile(self.HTTP_RESPONSE_CODE_PATTERN) + http_status_code_matched_groups = http_status_code_matcher.match(response) + content_length_matcher = re.compile(self.CONTENT_LENGTH_PATTERN) + content_length_matched_groups = content_length_matcher.search(response) + return http_status_code_matched_groups.group(1), content_length_matched_groups.group(1) + + def _handle_discovery_response_body(self, rc, response): + self._raise_on_timeout(rc) + return response + + def _raise_on_timeout(self, rc): + if rc == self.LOW_LEVEL_RC_TIMEOUT: + raise DiscoveryTimeoutException() + + def _raise_if_not_200(self, status_code, response_body): # response_body here is str in Py3 + if status_code != self.HTTP_SC_200: + expected_exception = self._expected_exception_map.get(status_code) + if expected_exception: + raise expected_exception + else: + raise DiscoveryFailure(response_body) + return DiscoveryInfo(response_body) diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/jobs/__init__.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/jobs/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/jobs/thingJobManager.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/jobs/thingJobManager.py new file mode 100644 index 0000000..d2396b2 --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/jobs/thingJobManager.py @@ -0,0 +1,156 @@ +# /* +# * Copyright 2010-2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. 
+# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +import json + +_BASE_THINGS_TOPIC = "$aws/things/" +_NOTIFY_OPERATION = "notify" +_NOTIFY_NEXT_OPERATION = "notify-next" +_GET_OPERATION = "get" +_START_NEXT_OPERATION = "start-next" +_WILDCARD_OPERATION = "+" +_UPDATE_OPERATION = "update" +_ACCEPTED_REPLY = "accepted" +_REJECTED_REPLY = "rejected" +_WILDCARD_REPLY = "#" + +#Members of this enum are tuples +_JOB_ID_REQUIRED_INDEX = 1 +_JOB_OPERATION_INDEX = 2 + +_STATUS_KEY = 'status' +_STATUS_DETAILS_KEY = 'statusDetails' +_EXPECTED_VERSION_KEY = 'expectedVersion' +_EXEXCUTION_NUMBER_KEY = 'executionNumber' +_INCLUDE_JOB_EXECUTION_STATE_KEY = 'includeJobExecutionState' +_INCLUDE_JOB_DOCUMENT_KEY = 'includeJobDocument' +_CLIENT_TOKEN_KEY = 'clientToken' +_STEP_TIMEOUT_IN_MINUTES_KEY = 'stepTimeoutInMinutes' + +#The type of job topic. +class jobExecutionTopicType(object): + JOB_UNRECOGNIZED_TOPIC = (0, False, '') + JOB_GET_PENDING_TOPIC = (1, False, _GET_OPERATION) + JOB_START_NEXT_TOPIC = (2, False, _START_NEXT_OPERATION) + JOB_DESCRIBE_TOPIC = (3, True, _GET_OPERATION) + JOB_UPDATE_TOPIC = (4, True, _UPDATE_OPERATION) + JOB_NOTIFY_TOPIC = (5, False, _NOTIFY_OPERATION) + JOB_NOTIFY_NEXT_TOPIC = (6, False, _NOTIFY_NEXT_OPERATION) + JOB_WILDCARD_TOPIC = (7, False, _WILDCARD_OPERATION) + +#Members of this enum are tuples +_JOB_SUFFIX_INDEX = 1 +#The type of reply topic, or #JOB_REQUEST_TYPE for topics that are not replies. 
+class jobExecutionTopicReplyType(object): + JOB_UNRECOGNIZED_TOPIC_TYPE = (0, '') + JOB_REQUEST_TYPE = (1, '') + JOB_ACCEPTED_REPLY_TYPE = (2, '/' + _ACCEPTED_REPLY) + JOB_REJECTED_REPLY_TYPE = (3, '/' + _REJECTED_REPLY) + JOB_WILDCARD_REPLY_TYPE = (4, '/' + _WILDCARD_REPLY) + +_JOB_STATUS_INDEX = 1 +class jobExecutionStatus(object): + JOB_EXECUTION_STATUS_NOT_SET = (0, None) + JOB_EXECUTION_QUEUED = (1, 'QUEUED') + JOB_EXECUTION_IN_PROGRESS = (2, 'IN_PROGRESS') + JOB_EXECUTION_FAILED = (3, 'FAILED') + JOB_EXECUTION_SUCCEEDED = (4, 'SUCCEEDED') + JOB_EXECUTION_CANCELED = (5, 'CANCELED') + JOB_EXECUTION_REJECTED = (6, 'REJECTED') + JOB_EXECUTION_UNKNOWN_STATUS = (99, None) + +def _getExecutionStatus(jobStatus): + try: + return jobStatus[_JOB_STATUS_INDEX] + except KeyError: + return None + +def _isWithoutJobIdTopicType(srcJobExecTopicType): + return (srcJobExecTopicType == jobExecutionTopicType.JOB_GET_PENDING_TOPIC or srcJobExecTopicType == jobExecutionTopicType.JOB_START_NEXT_TOPIC + or srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_TOPIC or srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC) + +class thingJobManager: + def __init__(self, thingName, clientToken = None): + self._thingName = thingName + self._clientToken = clientToken + + def getJobTopic(self, srcJobExecTopicType, srcJobExecTopicReplyType=jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId=None): + if self._thingName is None: + return None + + #Verify topics that only support request type, actually have request type specified for reply + if (srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_TOPIC or srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC) and srcJobExecTopicReplyType != jobExecutionTopicReplyType.JOB_REQUEST_TYPE: + return None + + #Verify topics that explicitly do not want a job ID do not have one specified + if (jobId is not None and _isWithoutJobIdTopicType(srcJobExecTopicType)): + return None + + #Verify job ID is present if the topic 
requires one + if jobId is None and srcJobExecTopicType[_JOB_ID_REQUIRED_INDEX]: + return None + + #Ensure the job operation is a non-empty string + if srcJobExecTopicType[_JOB_OPERATION_INDEX] == '': + return None + + if srcJobExecTopicType[_JOB_ID_REQUIRED_INDEX]: + return '{0}{1}/jobs/{2}/{3}{4}'.format(_BASE_THINGS_TOPIC, self._thingName, str(jobId), srcJobExecTopicType[_JOB_OPERATION_INDEX], srcJobExecTopicReplyType[_JOB_SUFFIX_INDEX]) + elif srcJobExecTopicType == jobExecutionTopicType.JOB_WILDCARD_TOPIC: + return '{0}{1}/jobs/#'.format(_BASE_THINGS_TOPIC, self._thingName) + else: + return '{0}{1}/jobs/{2}{3}'.format(_BASE_THINGS_TOPIC, self._thingName, srcJobExecTopicType[_JOB_OPERATION_INDEX], srcJobExecTopicReplyType[_JOB_SUFFIX_INDEX]) + + def serializeJobExecutionUpdatePayload(self, status, statusDetails=None, expectedVersion=0, executionNumber=0, includeJobExecutionState=False, includeJobDocument=False, stepTimeoutInMinutes=None): + executionStatus = _getExecutionStatus(status) + if executionStatus is None: + return None + payload = {_STATUS_KEY: executionStatus} + if statusDetails: + payload[_STATUS_DETAILS_KEY] = statusDetails + if expectedVersion > 0: + payload[_EXPECTED_VERSION_KEY] = str(expectedVersion) + if executionNumber > 0: + payload[_EXEXCUTION_NUMBER_KEY] = str(executionNumber) + if includeJobExecutionState: + payload[_INCLUDE_JOB_EXECUTION_STATE_KEY] = True + if includeJobDocument: + payload[_INCLUDE_JOB_DOCUMENT_KEY] = True + if self._clientToken is not None: + payload[_CLIENT_TOKEN_KEY] = self._clientToken + if stepTimeoutInMinutes is not None: + payload[_STEP_TIMEOUT_IN_MINUTES_KEY] = stepTimeoutInMinutes + return json.dumps(payload) + + def serializeDescribeJobExecutionPayload(self, executionNumber=0, includeJobDocument=True): + payload = {_INCLUDE_JOB_DOCUMENT_KEY: includeJobDocument} + if executionNumber > 0: + payload[_EXEXCUTION_NUMBER_KEY] = executionNumber + if self._clientToken is not None: + payload[_CLIENT_TOKEN_KEY] = 
self._clientToken + return json.dumps(payload) + + def serializeStartNextPendingJobExecutionPayload(self, statusDetails=None, stepTimeoutInMinutes=None): + payload = {} + if self._clientToken is not None: + payload[_CLIENT_TOKEN_KEY] = self._clientToken + if statusDetails is not None: + payload[_STATUS_DETAILS_KEY] = statusDetails + if stepTimeoutInMinutes is not None: + payload[_STEP_TIMEOUT_IN_MINUTES_KEY] = stepTimeoutInMinutes + return json.dumps(payload) + + def serializeClientTokenPayload(self): + return json.dumps({_CLIENT_TOKEN_KEY: self._clientToken}) if self._clientToken is not None else '{}' diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/__init__.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/connection/__init__.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/connection/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/connection/alpn.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/connection/alpn.py new file mode 100644 index 0000000..8da98dd --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/connection/alpn.py @@ -0,0 +1,63 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
+# */ + + +try: + import ssl +except: + ssl = None + + +class SSLContextBuilder(object): + + def __init__(self): + self.check_supportability() + self._ssl_context = ssl.create_default_context() + + def check_supportability(self): + if ssl is None: + raise RuntimeError("This platform has no SSL/TLS.") + if not hasattr(ssl, "SSLContext"): + raise NotImplementedError("This platform does not support SSLContext. Python 2.7.10+/3.5+ is required.") + if not hasattr(ssl.SSLContext, "set_alpn_protocols"): + raise NotImplementedError("This platform does not support ALPN as TLS extensions. Python 2.7.10+/3.5+ is required.") + + def with_ca_certs(self, ca_certs): + self._ssl_context.load_verify_locations(ca_certs) + return self + + def with_cert_key_pair(self, cert_file, key_file): + self._ssl_context.load_cert_chain(cert_file, key_file) + return self + + def with_cert_reqs(self, cert_reqs): + self._ssl_context.verify_mode = cert_reqs + return self + + def with_check_hostname(self, check_hostname): + self._ssl_context.check_hostname = check_hostname + return self + + def with_ciphers(self, ciphers): + if ciphers is not None: + self._ssl_context.set_ciphers(ciphers) # set_ciphers() does not allow None input. Use default (do nothing) if None + return self + + def with_alpn_protocols(self, alpn_protocols): + self._ssl_context.set_alpn_protocols(alpn_protocols) + return self + + def build(self): + return self._ssl_context diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/connection/cores.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/connection/cores.py new file mode 100644 index 0000000..df12470 --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/connection/cores.py @@ -0,0 +1,699 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. 
+# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +# This class implements the progressive backoff logic for auto-reconnect. +# It manages the reconnect wait time for the current reconnect, controling +# when to increase it and when to reset it. + + +import re +import sys +import ssl +import errno +import struct +import socket +import base64 +import time +import threading +import logging +import os +from datetime import datetime +import hashlib +import hmac +from AWSIoTPythonSDK.exception.AWSIoTExceptions import ClientError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import wssNoKeyInEnvironmentError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import wssHandShakeError +from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC +try: + from urllib.parse import quote # Python 3+ +except ImportError: + from urllib import quote +# INI config file handling +try: + from configparser import ConfigParser # Python 3+ + from configparser import NoOptionError + from configparser import NoSectionError +except ImportError: + from ConfigParser import ConfigParser + from ConfigParser import NoOptionError + from ConfigParser import NoSectionError + + +class ProgressiveBackOffCore: + # Logger + _logger = logging.getLogger(__name__) + + def __init__(self, srcBaseReconnectTimeSecond=1, srcMaximumReconnectTimeSecond=32, srcMinimumConnectTimeSecond=20): + # The base reconnection time in seconds, default 1 + self._baseReconnectTimeSecond = srcBaseReconnectTimeSecond + # The maximum reconnection time in seconds, default 32 + self._maximumReconnectTimeSecond = srcMaximumReconnectTimeSecond + # The minimum time in 
milliseconds that a connection must be maintained in order to be considered stable + # Default 20 + self._minimumConnectTimeSecond = srcMinimumConnectTimeSecond + # Current backOff time in seconds, init to equal to 0 + self._currentBackoffTimeSecond = 1 + # Handler for timer + self._resetBackoffTimer = None + + # For custom progressiveBackoff timing configuration + def configTime(self, srcBaseReconnectTimeSecond, srcMaximumReconnectTimeSecond, srcMinimumConnectTimeSecond): + if srcBaseReconnectTimeSecond < 0 or srcMaximumReconnectTimeSecond < 0 or srcMinimumConnectTimeSecond < 0: + self._logger.error("init: Negative time configuration detected.") + raise ValueError("Negative time configuration detected.") + if srcBaseReconnectTimeSecond >= srcMinimumConnectTimeSecond: + self._logger.error("init: Min connect time should be bigger than base reconnect time.") + raise ValueError("Min connect time should be bigger than base reconnect time.") + self._baseReconnectTimeSecond = srcBaseReconnectTimeSecond + self._maximumReconnectTimeSecond = srcMaximumReconnectTimeSecond + self._minimumConnectTimeSecond = srcMinimumConnectTimeSecond + self._currentBackoffTimeSecond = 1 + + # Block the reconnect logic for _currentBackoffTimeSecond + # Update the currentBackoffTimeSecond for the next reconnect + # Cancel the in-waiting timer for resetting backOff time + # This should get called only when a disconnect/reconnect happens + def backOff(self): + self._logger.debug("backOff: current backoff time is: " + str(self._currentBackoffTimeSecond) + " sec.") + if self._resetBackoffTimer is not None: + # Cancel the timer + self._resetBackoffTimer.cancel() + # Block the reconnect logic + time.sleep(self._currentBackoffTimeSecond) + # Update the backoff time + if self._currentBackoffTimeSecond == 0: + # This is the first attempt to connect, set it to base + self._currentBackoffTimeSecond = self._baseReconnectTimeSecond + else: + # r_cur = min(2^n*r_base, r_max) + self._currentBackoffTimeSecond 
= min(self._maximumReconnectTimeSecond, self._currentBackoffTimeSecond * 2) + + # Start the timer for resetting _currentBackoffTimeSecond + # Will be cancelled upon calling backOff + def startStableConnectionTimer(self): + self._resetBackoffTimer = threading.Timer(self._minimumConnectTimeSecond, + self._connectionStableThenResetBackoffTime) + self._resetBackoffTimer.start() + + def stopStableConnectionTimer(self): + if self._resetBackoffTimer is not None: + # Cancel the timer + self._resetBackoffTimer.cancel() + + # Timer callback to reset _currentBackoffTimeSecond + # If the connection is stable for longer than _minimumConnectTimeSecond, + # reset the currentBackoffTimeSecond to _baseReconnectTimeSecond + def _connectionStableThenResetBackoffTime(self): + self._logger.debug( + "stableConnection: Resetting the backoff time to: " + str(self._baseReconnectTimeSecond) + " sec.") + self._currentBackoffTimeSecond = self._baseReconnectTimeSecond + + +class SigV4Core: + + _logger = logging.getLogger(__name__) + + def __init__(self): + self._aws_access_key_id = "" + self._aws_secret_access_key = "" + self._aws_session_token = "" + self._credentialConfigFilePath = "~/.aws/credentials" + + def setIAMCredentials(self, srcAWSAccessKeyID, srcAWSSecretAccessKey, srcAWSSessionToken): + self._aws_access_key_id = srcAWSAccessKeyID + self._aws_secret_access_key = srcAWSSecretAccessKey + self._aws_session_token = srcAWSSessionToken + + def _createAmazonDate(self): + # Returned as a unicode string in Py3.x + amazonDate = [] + currentTime = datetime.utcnow() + YMDHMS = currentTime.strftime('%Y%m%dT%H%M%SZ') + YMD = YMDHMS[0:YMDHMS.index('T')] + amazonDate.append(YMD) + amazonDate.append(YMDHMS) + return amazonDate + + def _sign(self, key, message): + # Returned as a utf-8 byte string in Py3.x + return hmac.new(key, message.encode('utf-8'), hashlib.sha256).digest() + + def _getSignatureKey(self, key, dateStamp, regionName, serviceName): + # Returned as a utf-8 byte string in Py3.x + 
kDate = self._sign(('AWS4' + key).encode('utf-8'), dateStamp) + kRegion = self._sign(kDate, regionName) + kService = self._sign(kRegion, serviceName) + kSigning = self._sign(kService, 'aws4_request') + return kSigning + + def _checkIAMCredentials(self): + # Check custom config + ret = self._checkKeyInCustomConfig() + # Check environment variables + if not ret: + ret = self._checkKeyInEnv() + # Check files + if not ret: + ret = self._checkKeyInFiles() + # All credentials returned as unicode strings in Py3.x + return ret + + def _checkKeyInEnv(self): + ret = dict() + self._aws_access_key_id = os.environ.get('AWS_ACCESS_KEY_ID') + self._aws_secret_access_key = os.environ.get('AWS_SECRET_ACCESS_KEY') + self._aws_session_token = os.environ.get('AWS_SESSION_TOKEN') + if self._aws_access_key_id is not None and self._aws_secret_access_key is not None: + ret["aws_access_key_id"] = self._aws_access_key_id + ret["aws_secret_access_key"] = self._aws_secret_access_key + # We do not necessarily need session token... 
+ if self._aws_session_token is not None: + ret["aws_session_token"] = self._aws_session_token + self._logger.debug("IAM credentials from env var.") + return ret + + def _checkKeyInINIDefault(self, srcConfigParser, sectionName): + ret = dict() + # Check aws_access_key_id and aws_secret_access_key + try: + ret["aws_access_key_id"] = srcConfigParser.get(sectionName, "aws_access_key_id") + ret["aws_secret_access_key"] = srcConfigParser.get(sectionName, "aws_secret_access_key") + except NoOptionError: + self._logger.warn("Cannot find IAM keyID/secretKey in credential file.") + # We do not continue searching if we cannot even get IAM id/secret right + if len(ret) == 2: + # Check aws_session_token, optional + try: + ret["aws_session_token"] = srcConfigParser.get(sectionName, "aws_session_token") + except NoOptionError: + self._logger.debug("No AWS Session Token found.") + return ret + + def _checkKeyInFiles(self): + credentialFile = None + credentialConfig = None + ret = dict() + # Should be compatible with aws cli default credential configuration + # *NIX/Windows + try: + # See if we get the file + credentialConfig = ConfigParser() + credentialFilePath = os.path.expanduser(self._credentialConfigFilePath) # Is it compatible with windows? \/ + credentialConfig.read(credentialFilePath) + # Now we have the file, start looking for credentials... 
+ # 'default' section + ret = self._checkKeyInINIDefault(credentialConfig, "default") + if not ret: + # 'DEFAULT' section + ret = self._checkKeyInINIDefault(credentialConfig, "DEFAULT") + self._logger.debug("IAM credentials from file.") + except IOError: + self._logger.debug("No IAM credential configuration file in " + credentialFilePath) + except NoSectionError: + self._logger.error("Cannot find IAM 'default' section.") + return ret + + def _checkKeyInCustomConfig(self): + ret = dict() + if self._aws_access_key_id != "" and self._aws_secret_access_key != "": + ret["aws_access_key_id"] = self._aws_access_key_id + ret["aws_secret_access_key"] = self._aws_secret_access_key + # We do not necessarily need session token... + if self._aws_session_token != "": + ret["aws_session_token"] = self._aws_session_token + self._logger.debug("IAM credentials from custom config.") + return ret + + def createWebsocketEndpoint(self, host, port, region, method, awsServiceName, path): + # Return the endpoint as unicode string in 3.x + # Gather all the facts + amazonDate = self._createAmazonDate() + amazonDateSimple = amazonDate[0] # Unicode in 3.x + amazonDateComplex = amazonDate[1] # Unicode in 3.x + allKeys = self._checkIAMCredentials() # Unicode in 3.x + if not self._hasCredentialsNecessaryForWebsocket(allKeys): + raise wssNoKeyInEnvironmentError() + else: + # Because of self._hasCredentialsNecessaryForWebsocket(...), keyID and secretKey should not be None from here + keyID = allKeys["aws_access_key_id"] + secretKey = allKeys["aws_secret_access_key"] + # amazonDateSimple and amazonDateComplex are guaranteed not to be None + queryParameters = "X-Amz-Algorithm=AWS4-HMAC-SHA256" + \ + "&X-Amz-Credential=" + keyID + "%2F" + amazonDateSimple + "%2F" + region + "%2F" + awsServiceName + "%2Faws4_request" + \ + "&X-Amz-Date=" + amazonDateComplex + \ + "&X-Amz-Expires=86400" + \ + "&X-Amz-SignedHeaders=host" # Unicode in 3.x + hashedPayload = 
hashlib.sha256(str("").encode('utf-8')).hexdigest()  # Unicode in 3.x
+            # Create the string to sign
+            signedHeaders = "host"
+            canonicalHeaders = "host:" + host + "\n"
+            canonicalRequest = method + "\n" + path + "\n" + queryParameters + "\n" + canonicalHeaders + "\n" + signedHeaders + "\n" + hashedPayload  # Unicode in 3.x
+            hashedCanonicalRequest = hashlib.sha256(str(canonicalRequest).encode('utf-8')).hexdigest()  # Unicode in 3.x
+            stringToSign = "AWS4-HMAC-SHA256\n" + amazonDateComplex + "\n" + amazonDateSimple + "/" + region + "/" + awsServiceName + "/aws4_request\n" + hashedCanonicalRequest  # Unicode in 3.x
+            # Sign it
+            signingKey = self._getSignatureKey(secretKey, amazonDateSimple, region, awsServiceName)
+            signature = hmac.new(signingKey, (stringToSign).encode("utf-8"), hashlib.sha256).hexdigest()
+            # Generate the url
+            url = "wss://" + host + ":" + str(port) + path + '?' + queryParameters + "&X-Amz-Signature=" + signature
+            # See if we have an STS token; if we do, add it
+            awsSessionTokenCandidate = allKeys.get("aws_session_token")
+            if awsSessionTokenCandidate is not None and len(awsSessionTokenCandidate) != 0:
+                aws_session_token = allKeys["aws_session_token"]
+                url += "&X-Amz-Security-Token=" + quote(aws_session_token.encode("utf-8"))  # Unicode in 3.x
+            self._logger.debug("createWebsocketEndpoint: Websocket URL: " + url)
+            return url
+
+    def _hasCredentialsNecessaryForWebsocket(self, allKeys):
+        awsAccessKeyIdCandidate = allKeys.get("aws_access_key_id")
+        awsSecretAccessKeyCandidate = allKeys.get("aws_secret_access_key")
+        # A None value is NOT considered a valid entry
+        validEntries = awsAccessKeyIdCandidate is not None and awsSecretAccessKeyCandidate is not None
+        if validEntries:
+            # An empty value is NOT considered a valid entry
+            validEntries &= (len(awsAccessKeyIdCandidate) != 0 and len(awsSecretAccessKeyCandidate) != 0)
+        return validEntries
+
+
+# This is an internal class that buffers the incoming bytes into an
+# internal buffer until it gets the full
desired length of bytes.
+# At that time, this bufferedReader will be reset.
+# *Error handling:
+# For retry errors (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE, EAGAIN),
+# leave them to the paho _packet_read for further handling (ignored and tried
+# again when data is available).
+# For other errors, leave them to the paho _packet_read for error reporting.
+
+
+class _BufferedReader:
+    _sslSocket = None
+    _internalBuffer = None
+    _remainedLength = -1
+    _bufferingInProgress = False
+
+    def __init__(self, sslSocket):
+        self._sslSocket = sslSocket
+        self._internalBuffer = bytearray()
+        self._bufferingInProgress = False
+
+    def _reset(self):
+        self._internalBuffer = bytearray()
+        self._remainedLength = -1
+        self._bufferingInProgress = False
+
+    def read(self, numberOfBytesToBeBuffered):
+        if not self._bufferingInProgress:  # If the last read is completed...
+            self._remainedLength = numberOfBytesToBeBuffered
+            self._bufferingInProgress = True  # Now we start buffering a new length of bytes
+
+        while self._remainedLength > 0:  # Read in a loop, always trying to read the remaining length
+            # If the data is temporarily not available, socket.error will be raised and caught by paho
+            dataChunk = self._sslSocket.read(self._remainedLength)
+            # There is a chance that the server terminates the connection without closing the socket.
+            # If that happens, raise an exception and enter the reconnect flow.
+            if not dataChunk:
+                raise socket.error(errno.ECONNABORTED, 0)
+            self._internalBuffer.extend(dataChunk)  # Buffer the data
+            self._remainedLength -= len(dataChunk)  # Update the remaining length
+
+        # The requested length of bytes is buffered; reset the context and return it.
+        # Otherwise an error should be raised.
+        ret = self._internalBuffer
+        self._reset()
+        return ret  # This should always be a bytearray
+
+
+# This is the internal class that sends requested data out chunk by chunk according
+# to the availability of the socket write operation.
If the requested bytes of data
+# (after encoding) need to be sent out in separate socket write operations (most
+# probably interrupted by socket.error with errno = ssl.SSL_ERROR_WANT_WRITE),
+# the write pointer is stored to ensure that the remaining bytes will be sent the
+# next time this function gets called.
+# *Error handling:
+# For retry errors (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE, EAGAIN),
+# leave them to the paho _packet_read for further handling (ignored and tried
+# again when data is available).
+# For other errors, leave them to the paho _packet_read for error reporting.
+
+
+class _BufferedWriter:
+    _sslSocket = None
+    _internalBuffer = None
+    _writingInProgress = False
+    _requestedDataLength = -1
+
+    def __init__(self, sslSocket):
+        self._sslSocket = sslSocket
+        self._internalBuffer = bytearray()
+        self._writingInProgress = False
+        self._requestedDataLength = -1
+
+    def _reset(self):
+        self._internalBuffer = bytearray()
+        self._writingInProgress = False
+        self._requestedDataLength = -1
+
+    # Input data for this function needs to be an encoded wss frame
+    # Always request packet[pos=0:] (raw MQTT data)
+    def write(self, encodedData, payloadLength):
+        # encodedData should always be a bytearray
+        # Check if we have a frame that is partially sent
+        if not self._writingInProgress:
+            self._internalBuffer = encodedData
+            self._writingInProgress = True
+            self._requestedDataLength = payloadLength
+        # Now, write as much as we can
+        lengthWritten = self._sslSocket.write(self._internalBuffer)
+        self._internalBuffer = self._internalBuffer[lengthWritten:]
+        # This MQTT packet has been sent out in a wss frame, completely
+        if len(self._internalBuffer) == 0:
+            ret = self._requestedDataLength
+            self._reset()
+            return ret
+        # This socket write is half-baked...
+ else: + return 0 # Ensure that the 'pos' inside the MQTT packet never moves since we have not finished the transmission of this encoded frame + + +class SecuredWebSocketCore: + # Websocket Constants + _OP_CONTINUATION = 0x0 + _OP_TEXT = 0x1 + _OP_BINARY = 0x2 + _OP_CONNECTION_CLOSE = 0x8 + _OP_PING = 0x9 + _OP_PONG = 0xa + # Websocket Connect Status + _WebsocketConnectInit = -1 + _WebsocketDisconnected = 1 + + _logger = logging.getLogger(__name__) + + def __init__(self, socket, hostAddress, portNumber, AWSAccessKeyID="", AWSSecretAccessKey="", AWSSessionToken=""): + self._connectStatus = self._WebsocketConnectInit + # Handlers + self._sslSocket = socket + self._sigV4Handler = self._createSigV4Core() + self._sigV4Handler.setIAMCredentials(AWSAccessKeyID, AWSSecretAccessKey, AWSSessionToken) + # Endpoint Info + self._hostAddress = hostAddress + self._portNumber = portNumber + # Section Flags + self._hasOpByte = False + self._hasPayloadLengthFirst = False + self._hasPayloadLengthExtended = False + self._hasMaskKey = False + self._hasPayload = False + # Properties for current websocket frame + self._isFIN = False + self._RSVBits = None + self._opCode = None + self._needMaskKey = False + self._payloadLengthBytesLength = 1 + self._payloadLength = 0 + self._maskKey = None + self._payloadDataBuffer = bytearray() # Once the whole wss connection is lost, there is no need to keep the buffered payload + try: + self._handShake(hostAddress, portNumber) + except wssNoKeyInEnvironmentError: # Handle SigV4 signing and websocket handshaking errors + raise ValueError("No Access Key/KeyID Error") + except wssHandShakeError: + raise ValueError("Websocket Handshake Error") + except ClientError as e: + raise ValueError(e.message) + # Now we have a socket with secured websocket... 
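The `_BufferedWriter` above returns `0` until the whole encoded frame has left the socket, so paho's packet position never advances early. A minimal standalone sketch of that bookkeeping, assuming a socket that only accepts a few bytes per call (`SketchBufferedWriter` and `ChunkySocket` are hypothetical stand-ins, not SDK classes):

```python
class SketchBufferedWriter:
    """Keeps unsent bytes and reports the payload length only on the final chunk."""
    def __init__(self, sock):
        self._sock = sock
        self._buffer = bytearray()
        self._in_progress = False
        self._requested = -1

    def write(self, encoded, payload_length):
        if not self._in_progress:            # start a new frame
            self._buffer = bytearray(encoded)
            self._in_progress = True
            self._requested = payload_length
        written = self._sock.write(self._buffer)   # may be a partial write
        self._buffer = self._buffer[written:]
        if not self._buffer:                 # frame fully sent, report full length
            done, self._requested = self._requested, -1
            self._in_progress = False
            return done
        return 0                             # keep the caller's packet position


class ChunkySocket:
    """Fake socket that writes at most 3 bytes per call."""
    def __init__(self):
        self.sent = bytearray()
    def write(self, data):
        chunk = data[:3]
        self.sent.extend(chunk)
        return len(chunk)


sock = ChunkySocket()
writer = SketchBufferedWriter(sock)
results = [writer.write(b"abcdefgh", 8)]
while results[-1] == 0:
    results.append(writer.write(b"abcdefgh", 8))
print(results)  # [0, 0, 8]: two partial writes, then the full length is reported
```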
+        self._bufferedReader = _BufferedReader(self._sslSocket)
+        self._bufferedWriter = _BufferedWriter(self._sslSocket)
+
+    def _createSigV4Core(self):
+        return SigV4Core()
+
+    def _generateMaskKey(self):
+        return bytearray(os.urandom(4))
+        # os.urandom returns an ascii str in 2.x, converted to bytearray
+        # os.urandom returns bytes in 3.x, converted to bytearray
+
+    def _reset(self):  # Reset the context for wss frame reception
+        # Control info
+        self._hasOpByte = False
+        self._hasPayloadLengthFirst = False
+        self._hasPayloadLengthExtended = False
+        self._hasMaskKey = False
+        self._hasPayload = False
+        # Frame Info
+        self._isFIN = False
+        self._RSVBits = None
+        self._opCode = None
+        self._needMaskKey = False
+        self._payloadLengthBytesLength = 1
+        self._payloadLength = 0
+        self._maskKey = None
+        # Never reset the payloadData since we might have fragmented MQTT data from the previous frame
+
+    def _generateWSSKey(self):
+        return base64.b64encode(os.urandom(128))  # Bytes
+
+    def _verifyWSSResponse(self, response, clientKey):
+        # Check if it is a 101 response
+        rawResponse = response.strip().lower()
+        if b"101 switching protocols" not in rawResponse or b"upgrade: websocket" not in rawResponse or b"connection: upgrade" not in rawResponse:
+            return False
+        # Parse out the sec-websocket-accept
+        WSSAcceptKeyIndex = response.strip().index(b"sec-websocket-accept: ") + len(b"sec-websocket-accept: ")
+        rawSecWebSocketAccept = response.strip()[WSSAcceptKeyIndex:].split(b"\r\n")[0].strip()
+        # Verify the WSSAcceptKey
+        return self._verifyWSSAcceptKey(rawSecWebSocketAccept, clientKey)
+
+    def _verifyWSSAcceptKey(self, srcAcceptKey, clientKey):
+        GUID = b"258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
+        verifyServerAcceptKey = base64.b64encode((hashlib.sha1(clientKey + GUID)).digest())  # Bytes
+        return srcAcceptKey == verifyServerAcceptKey
+
+    def _handShake(self, hostAddress, portNumber):
+        CRLF = "\r\n"
+        IOT_ENDPOINT_PATTERN = r"^[0-9a-zA-Z]+(\.ats|-ats)?\.iot\.(.*)\.amazonaws\..*"
+        matched
= re.compile(IOT_ENDPOINT_PATTERN, re.IGNORECASE).match(hostAddress) + if not matched: + raise ClientError("Invalid endpoint pattern for wss: %s" % hostAddress) + region = matched.group(2) + signedURL = self._sigV4Handler.createWebsocketEndpoint(hostAddress, portNumber, region, "GET", "iotdata", "/mqtt") + # Now we got a signedURL + path = signedURL[signedURL.index("/mqtt"):] + # Assemble HTTP request headers + Method = "GET " + path + " HTTP/1.1" + CRLF + Host = "Host: " + hostAddress + CRLF + Connection = "Connection: " + "Upgrade" + CRLF + Upgrade = "Upgrade: " + "websocket" + CRLF + secWebSocketVersion = "Sec-WebSocket-Version: " + "13" + CRLF + rawSecWebSocketKey = self._generateWSSKey() # Bytes + secWebSocketKey = "sec-websocket-key: " + rawSecWebSocketKey.decode('utf-8') + CRLF # Should be randomly generated... + secWebSocketProtocol = "Sec-WebSocket-Protocol: " + "mqttv3.1" + CRLF + secWebSocketExtensions = "Sec-WebSocket-Extensions: " + "permessage-deflate; client_max_window_bits" + CRLF + # Send the HTTP request + # Ensure that we are sending bytes, not by any chance unicode string + handshakeBytes = Method + Host + Connection + Upgrade + secWebSocketVersion + secWebSocketProtocol + secWebSocketExtensions + secWebSocketKey + CRLF + handshakeBytes = handshakeBytes.encode('utf-8') + self._sslSocket.write(handshakeBytes) + # Read it back (Non-blocking socket) + timeStart = time.time() + wssHandshakeResponse = bytearray() + while len(wssHandshakeResponse) == 0: + try: + wssHandshakeResponse += self._sslSocket.read(1024) # Response is always less than 1024 bytes + except socket.error as err: + if err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE: + if time.time() - timeStart > self._getTimeoutSec(): + raise err # We make sure that reconnect gets retried in Paho upon a wss reconnect response timeout + else: + raise err + # Verify response + # Now both wssHandshakeResponse and rawSecWebSocketKey are byte strings + if not 
self._verifyWSSResponse(wssHandshakeResponse, rawSecWebSocketKey): + raise wssHandShakeError() + else: + pass + + def _getTimeoutSec(self): + return DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC + + # Used to create a single wss frame + # Assume that the maximum length of a MQTT packet never exceeds the maximum length + # for a wss frame. Therefore, the FIN bit for the encoded frame will always be 1. + # Frames are encoded as BINARY frames. + def _encodeFrame(self, rawPayload, opCode, masked=1): + ret = bytearray() + # Op byte + opByte = 0x80 | opCode # Always a FIN, no RSV bits + ret.append(opByte) + # Payload Length bytes + maskBit = masked + payloadLength = len(rawPayload) + if payloadLength <= 125: + ret.append((maskBit << 7) | payloadLength) + elif payloadLength <= 0xffff: # 16-bit unsigned int + ret.append((maskBit << 7) | 126) + ret.extend(struct.pack("!H", payloadLength)) + elif payloadLength <= 0x7fffffffffffffff: # 64-bit unsigned int (most significant bit must be 0) + ret.append((maskBit << 7) | 127) + ret.extend(struct.pack("!Q", payloadLength)) + else: # Overflow + raise ValueError("Exceeds the maximum number of bytes for a single websocket frame.") + if maskBit == 1: + # Mask key bytes + maskKey = self._generateMaskKey() + ret.extend(maskKey) + # Mask the payload + payloadBytes = bytearray(rawPayload) + if maskBit == 1: + for i in range(0, payloadLength): + payloadBytes[i] ^= maskKey[i % 4] + ret.extend(payloadBytes) + # Return the assembled wss frame + return ret + + # Used for the wss client to close a wss connection + # Create and send a masked wss closing frame + def _closeWssConnection(self): + # Frames sent from client to server must be masked + self._sslSocket.write(self._encodeFrame(b"", self._OP_CONNECTION_CLOSE, masked=1)) + + # Used for the wss client to respond to a wss PING from server + # Create and send a masked PONG frame + def _sendPONG(self): + # Frames sent from client to server must be masked + self._sslSocket.write(self._encodeFrame(b"", 
self._OP_PONG, masked=1))
+
+    # Override sslSocket read. Always read from the wss internal payload buffer, which
+    # contains the buffered MQTT payload. This read will decode ONE wss frame every time
+    # and load in the payload for MQTT _packet_read. At any time, MQTT _packet_read
+    # should be able to read a complete MQTT packet from the payload (buffered per wss
+    # frame payload). If the MQTT packet is broken into separate wss frames, the different
+    # chunks will be buffered in separate frames and MQTT _packet_read will not be able
+    # to collect a complete MQTT packet to operate on until the necessary payload is
+    # fully buffered.
+    # If the requested number of bytes is not available, SSL_ERROR_WANT_READ will be
+    # raised to trigger another call of _packet_read when the data is available again.
+    def read(self, numberOfBytes):
+        # Check if we have enough data for paho
+        # _payloadDataBuffer will be non-empty only when the payload of a new wss frame
+        # has been unmasked.
+        if len(self._payloadDataBuffer) >= numberOfBytes:
+            ret = self._payloadDataBuffer[0:numberOfBytes]
+            self._payloadDataBuffer = self._payloadDataBuffer[numberOfBytes:]
+            # struct.unpack(fmt, string)  # Py2.x
+            # struct.unpack(fmt, buffer)  # Py3.x
+            # Here ret is always in bytes (buffer interface)
+            if sys.version_info[0] < 3:  # Py2.x
+                ret = str(ret)
+            return ret
+        # We don't. Try to buffer from the socket (it's a new wss frame).
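The frame reception below assembles each wss header incrementally: an op byte (FIN, RSV, opcode), a first length byte (mask bit plus 7-bit length), and an optional 2- or 8-byte extended length. The same bit layout can be illustrated with a standalone decoder (`decode_frame_header` is a hypothetical helper for illustration, not SDK code):

```python
import struct

def decode_frame_header(data):
    """Parse FIN, opcode, mask bit, payload length, and header size from raw bytes."""
    fin = (data[0] & 0x80) == 0x80
    opcode = data[0] & 0x0F
    masked = (data[1] & 0x80) == 0x80
    length = data[1] & 0x7F
    offset = 2
    if length == 126:                        # 16-bit extended payload length
        (length,) = struct.unpack("!H", data[2:4])
        offset = 4
    elif length == 127:                      # 64-bit extended payload length
        (length,) = struct.unpack("!Q", data[2:10])
        offset = 10
    return fin, opcode, masked, length, offset

# A FIN BINARY frame (opcode 0x2) with a 300-byte payload uses the 16-bit form:
header = bytes([0x82, 126]) + struct.pack("!H", 300)
print(decode_frame_header(header))  # (True, 2, False, 300, 4)
```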
+ if not self._hasOpByte: # Check if we need to buffer OpByte + opByte = self._bufferedReader.read(1) + self._isFIN = (opByte[0] & 0x80) == 0x80 + self._RSVBits = (opByte[0] & 0x70) + self._opCode = (opByte[0] & 0x0f) + self._hasOpByte = True # Finished buffering opByte + # Check if any of the RSV bits are set, if so, close the connection + # since client never sends negotiated extensions + if self._RSVBits != 0x0: + self._closeWssConnection() + self._connectStatus = self._WebsocketDisconnected + self._payloadDataBuffer = bytearray() + raise socket.error(ssl.SSL_ERROR_WANT_READ, "RSV bits set with NO negotiated extensions.") + if not self._hasPayloadLengthFirst: # Check if we need to buffer First Payload Length byte + payloadLengthFirst = self._bufferedReader.read(1) + self._hasPayloadLengthFirst = True # Finished buffering first byte of payload length + self._needMaskKey = (payloadLengthFirst[0] & 0x80) == 0x80 + payloadLengthFirstByteArray = bytearray() + payloadLengthFirstByteArray.extend(payloadLengthFirst) + self._payloadLength = (payloadLengthFirstByteArray[0] & 0x7f) + + if self._payloadLength == 126: + self._payloadLengthBytesLength = 2 + self._hasPayloadLengthExtended = False # Force to buffer the extended + elif self._payloadLength == 127: + self._payloadLengthBytesLength = 8 + self._hasPayloadLengthExtended = False # Force to buffer the extended + else: # _payloadLength <= 125: + self._hasPayloadLengthExtended = True # No need to buffer extended payload length + if not self._hasPayloadLengthExtended: # Check if we need to buffer Extended Payload Length bytes + payloadLengthExtended = self._bufferedReader.read(self._payloadLengthBytesLength) + self._hasPayloadLengthExtended = True + if sys.version_info[0] < 3: + payloadLengthExtended = str(payloadLengthExtended) + if self._payloadLengthBytesLength == 2: + self._payloadLength = struct.unpack("!H", payloadLengthExtended)[0] + else: # _payloadLengthBytesLength == 8 + self._payloadLength = struct.unpack("!Q", 
payloadLengthExtended)[0]
+
+        if self._needMaskKey:  # Response from server is masked, close the connection
+            self._closeWssConnection()
+            self._connectStatus = self._WebsocketDisconnected
+            self._payloadDataBuffer = bytearray()
+            raise socket.error(ssl.SSL_ERROR_WANT_READ, "Server response masked, closing connection and trying again.")
+
+        if not self._hasPayload:  # Check if we need to buffer the payload
+            payloadForThisFrame = self._bufferedReader.read(self._payloadLength)
+            self._hasPayload = True
+            # The client side should never receive a masked packet from the server side.
+            # Unmask it as needed
+            #if self._needMaskKey:
+            #    for i in range(0, self._payloadLength):
+            #        payloadForThisFrame[i] ^= self._maskKey[i % 4]
+            # Append it to the internal payload buffer
+            self._payloadDataBuffer.extend(payloadForThisFrame)
+        # Now we have the complete wss frame, reset the context
+        # Check to see if it is a wss closing frame
+        if self._opCode == self._OP_CONNECTION_CLOSE:
+            self._connectStatus = self._WebsocketDisconnected
+            self._payloadDataBuffer = bytearray()  # Ensure that once the wss closing frame comes, we have nothing to read and start all over again
+        # Check to see if it is a wss PING frame
+        if self._opCode == self._OP_PING:
+            self._sendPONG()  # Nothing more to do here; if the transmission of the last wss MQTT packet is not finished, it will continue
+        self._reset()
+        # Check again if we have enough data for paho
+        if len(self._payloadDataBuffer) >= numberOfBytes:
+            ret = self._payloadDataBuffer[0:numberOfBytes]
+            self._payloadDataBuffer = self._payloadDataBuffer[numberOfBytes:]
+            # struct.unpack(fmt, string)  # Py2.x
+            # struct.unpack(fmt, buffer)  # Py3.x
+            # Here ret is always in bytes (buffer interface)
+            if sys.version_info[0] < 3:  # Py2.x
+                ret = str(ret)
+            return ret
+        else:  # Fragmented MQTT packets in separate wss frames
+            raise socket.error(ssl.SSL_ERROR_WANT_READ, "Not a complete MQTT packet payload within this wss frame.")
+
+    def write(self, bytesToBeSent):
+
# When there is a disconnection, select will report a TypeError which triggers the reconnect. + # In reconnect, Paho will set the socket object (mocked by wss) to None, blocking other ops + # before a connection is re-established. + # This 'low-level' socket write op should always be able to write to plain socket. + # Error reporting is performed by Python socket itself. + # Wss closing frame handling is performed in the wss read. + return self._bufferedWriter.write(self._encodeFrame(bytesToBeSent, self._OP_BINARY, 1), len(bytesToBeSent)) + + def close(self): + if self._sslSocket is not None: + self._sslSocket.close() + self._sslSocket = None + + def getpeercert(self): + return self._sslSocket.getpeercert() + + def getSSLSocket(self): + if self._connectStatus != self._WebsocketDisconnected: + return self._sslSocket + else: + return None # Leave the sslSocket to Paho to close it. (_ssl.close() -> wssCore.close()) diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/__init__.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/clients.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/clients.py new file mode 100644 index 0000000..bb670f7 --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/clients.py @@ -0,0 +1,244 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. 
See the License for the specific language governing
+# * permissions and limitations under the License.
+# */
+
+import ssl
+import logging
+from threading import Lock
+from numbers import Number
+import AWSIoTPythonSDK.core.protocol.paho.client as mqtt
+from AWSIoTPythonSDK.core.protocol.paho.client import MQTT_ERR_SUCCESS
+from AWSIoTPythonSDK.core.protocol.internal.events import FixedEventMids
+
+
+class ClientStatus(object):
+
+    IDLE = 0
+    CONNECT = 1
+    RESUBSCRIBE = 2
+    DRAINING = 3
+    STABLE = 4
+    USER_DISCONNECT = 5
+    ABNORMAL_DISCONNECT = 6
+
+
+class ClientStatusContainer(object):
+
+    def __init__(self):
+        self._status = ClientStatus.IDLE
+
+    def get_status(self):
+        return self._status
+
+    def set_status(self, status):
+        if ClientStatus.USER_DISCONNECT == self._status:  # After a user-requested disconnect, allow no status update other than a new connect
+            if ClientStatus.CONNECT == status:
+                self._status = status
+        else:
+            self._status = status
+
+
+class InternalAsyncMqttClient(object):
+
+    _logger = logging.getLogger(__name__)
+
+    def __init__(self, client_id, clean_session, protocol, use_wss):
+        self._paho_client = self._create_paho_client(client_id, clean_session, None, protocol, use_wss)
+        self._use_wss = use_wss
+        self._event_callback_map_lock = Lock()
+        self._event_callback_map = dict()
+
+    def _create_paho_client(self, client_id, clean_session, user_data, protocol, use_wss):
+        self._logger.debug("Initializing MQTT layer...")
+        return mqtt.Client(client_id, clean_session, user_data, protocol, use_wss)
+
+    # TODO: Merge credentials providers configuration into one
+    def set_cert_credentials_provider(self, cert_credentials_provider):
+        # History issue from the Yun SDK, where AR9331 embedded Linux only has Python 2.7.3
+        # pre-installed. In this version, TLSv1_2 is not even an option.
+        # SSLv23 is a work-around which selects the highest TLS version between the client
+        # and service. If the user installs openssl v1.0.1+, this option will work fine for Mutual
+        # Auth.
+ # Note that we cannot force TLSv1.2 for Mutual Auth. in Python 2.7.3 and TLS support + # in Python only starts from Python2.7. + # See also: https://docs.python.org/2/library/ssl.html#ssl.PROTOCOL_SSLv23 + if self._use_wss: + ca_path = cert_credentials_provider.get_ca_path() + self._paho_client.tls_set(ca_certs=ca_path, cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_SSLv23) + else: + ca_path = cert_credentials_provider.get_ca_path() + cert_path = cert_credentials_provider.get_cert_path() + key_path = cert_credentials_provider.get_key_path() + self._paho_client.tls_set(ca_certs=ca_path,certfile=cert_path, keyfile=key_path, + cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_SSLv23) + + def set_iam_credentials_provider(self, iam_credentials_provider): + self._paho_client.configIAMCredentials(iam_credentials_provider.get_access_key_id(), + iam_credentials_provider.get_secret_access_key(), + iam_credentials_provider.get_session_token()) + + def set_endpoint_provider(self, endpoint_provider): + self._endpoint_provider = endpoint_provider + + def configure_last_will(self, topic, payload, qos, retain=False): + self._paho_client.will_set(topic, payload, qos, retain) + + def configure_alpn_protocols(self, alpn_protocols): + self._paho_client.config_alpn_protocols(alpn_protocols) + + def clear_last_will(self): + self._paho_client.will_clear() + + def set_username_password(self, username, password=None): + self._paho_client.username_pw_set(username, password) + + def set_socket_factory(self, socket_factory): + self._paho_client.socket_factory_set(socket_factory) + + def configure_reconnect_back_off(self, base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec): + self._paho_client.setBackoffTiming(base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec) + + def connect(self, keep_alive_sec, ack_callback=None): + host = self._endpoint_provider.get_host() + port = self._endpoint_provider.get_port() + + with 
self._event_callback_map_lock: + self._logger.debug("Filling in fixed event callbacks: CONNACK, DISCONNECT, MESSAGE") + self._event_callback_map[FixedEventMids.CONNACK_MID] = self._create_combined_on_connect_callback(ack_callback) + self._event_callback_map[FixedEventMids.DISCONNECT_MID] = self._create_combined_on_disconnect_callback(None) + self._event_callback_map[FixedEventMids.MESSAGE_MID] = self._create_converted_on_message_callback() + + rc = self._paho_client.connect(host, port, keep_alive_sec) + if MQTT_ERR_SUCCESS == rc: + self.start_background_network_io() + + return rc + + def start_background_network_io(self): + self._logger.debug("Starting network I/O thread...") + self._paho_client.loop_start() + + def stop_background_network_io(self): + self._logger.debug("Stopping network I/O thread...") + self._paho_client.loop_stop() + + def disconnect(self, ack_callback=None): + with self._event_callback_map_lock: + rc = self._paho_client.disconnect() + if MQTT_ERR_SUCCESS == rc: + self._logger.debug("Filling in custom disconnect event callback...") + combined_on_disconnect_callback = self._create_combined_on_disconnect_callback(ack_callback) + self._event_callback_map[FixedEventMids.DISCONNECT_MID] = combined_on_disconnect_callback + return rc + + def _create_combined_on_connect_callback(self, ack_callback): + def combined_on_connect_callback(mid, data): + self.on_online() + if ack_callback: + ack_callback(mid, data) + return combined_on_connect_callback + + def _create_combined_on_disconnect_callback(self, ack_callback): + def combined_on_disconnect_callback(mid, data): + self.on_offline() + if ack_callback: + ack_callback(mid, data) + return combined_on_disconnect_callback + + def _create_converted_on_message_callback(self): + def converted_on_message_callback(mid, data): + self.on_message(data) + return converted_on_message_callback + + # For client online notification + def on_online(self): + pass + + # For client offline notification + def on_offline(self): 
+ pass + + # For client message reception notification + def on_message(self, message): + pass + + def publish(self, topic, payload, qos, retain=False, ack_callback=None): + with self._event_callback_map_lock: + rc, mid = self._paho_client.publish(topic, payload, qos, retain) + if MQTT_ERR_SUCCESS == rc and qos > 0 and ack_callback: + self._logger.debug("Filling in custom puback (QoS>0) event callback...") + self._event_callback_map[mid] = ack_callback + return rc, mid + + def subscribe(self, topic, qos, ack_callback=None): + with self._event_callback_map_lock: + rc, mid = self._paho_client.subscribe(topic, qos) + if MQTT_ERR_SUCCESS == rc and ack_callback: + self._logger.debug("Filling in custom suback event callback...") + self._event_callback_map[mid] = ack_callback + return rc, mid + + def unsubscribe(self, topic, ack_callback=None): + with self._event_callback_map_lock: + rc, mid = self._paho_client.unsubscribe(topic) + if MQTT_ERR_SUCCESS == rc and ack_callback: + self._logger.debug("Filling in custom unsuback event callback...") + self._event_callback_map[mid] = ack_callback + return rc, mid + + def register_internal_event_callbacks(self, on_connect, on_disconnect, on_publish, on_subscribe, on_unsubscribe, on_message): + self._logger.debug("Registering internal event callbacks to MQTT layer...") + self._paho_client.on_connect = on_connect + self._paho_client.on_disconnect = on_disconnect + self._paho_client.on_publish = on_publish + self._paho_client.on_subscribe = on_subscribe + self._paho_client.on_unsubscribe = on_unsubscribe + self._paho_client.on_message = on_message + + def unregister_internal_event_callbacks(self): + self._logger.debug("Unregistering internal event callbacks from MQTT layer...") + self._paho_client.on_connect = None + self._paho_client.on_disconnect = None + self._paho_client.on_publish = None + self._paho_client.on_subscribe = None + self._paho_client.on_unsubscribe = None + self._paho_client.on_message = None + + def 
invoke_event_callback(self, mid, data=None): + with self._event_callback_map_lock: + event_callback = self._event_callback_map.get(mid) + # For invoking the event callback, we do not need to acquire the lock + if event_callback: + self._logger.debug("Invoking custom event callback...") + if data is not None: + event_callback(mid=mid, data=data) + else: + event_callback(mid=mid) + if isinstance(mid, Number): # Do NOT remove callbacks for CONNACK/DISCONNECT/MESSAGE + self._logger.debug("This custom event callback is for pub/sub/unsub, removing it after invocation...") + with self._event_callback_map_lock: + del self._event_callback_map[mid] + + def remove_event_callback(self, mid): + with self._event_callback_map_lock: + if mid in self._event_callback_map: + self._logger.debug("Removing custom event callback...") + del self._event_callback_map[mid] + + def clean_up_event_callbacks(self): + with self._event_callback_map_lock: + self._event_callback_map.clear() + + def get_event_callback_map(self): + return self._event_callback_map diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/defaults.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/defaults.py new file mode 100644 index 0000000..66817d3 --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/defaults.py @@ -0,0 +1,20 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
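The callback bookkeeping in `invoke_event_callback` above hinges on one detail: lifecycle events (CONNACK, DISCONNECT, MESSAGE) are keyed by fixed string MIDs and their callbacks survive invocation, while pub/sub/unsub acks are keyed by paho's numeric MIDs and are removed once fired. A minimal sketch of that registry pattern (simplified names, not the SDK's actual class):

```python
from numbers import Number
from threading import Lock

class CallbackRegistry(object):
    """Toy version of the SDK's event callback map."""
    def __init__(self):
        self._lock = Lock()
        self._map = {}

    def register(self, mid, callback):
        with self._lock:
            self._map[mid] = callback

    def invoke(self, mid, data=None):
        with self._lock:
            callback = self._map.get(mid)
        if callback:
            callback(mid, data)
        # Numeric MIDs are one-shot acks; fixed string MIDs persist
        if isinstance(mid, Number):
            with self._lock:
                self._map.pop(mid, None)

registry = CallbackRegistry()
fired = []
registry.register("CONNECTED", lambda mid, data: fired.append(mid))
registry.register(42, lambda mid, data: fired.append(mid))
registry.invoke("CONNECTED")
registry.invoke(42)
registry.invoke(42)           # numeric MID already removed: no second firing
registry.invoke("CONNECTED")  # fixed MID persists: fires again
```

This is why the SDK can reuse one map for both persistent lifecycle callbacks and transient per-operation acks without ever confusing the two key spaces.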
+# */ + +DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC = 30 +DEFAULT_OPERATION_TIMEOUT_SEC = 5 +DEFAULT_DRAINING_INTERNAL_SEC = 0.5 +METRICS_PREFIX = "?SDK=Python&Version=" +ALPN_PROTCOLS = "x-amzn-mqtt-ca" \ No newline at end of file diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/events.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/events.py new file mode 100644 index 0000000..90f0b70 --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/events.py @@ -0,0 +1,29 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +class EventTypes(object): + CONNACK = 0 + DISCONNECT = 1 + PUBACK = 2 + SUBACK = 3 + UNSUBACK = 4 + MESSAGE = 5 + + +class FixedEventMids(object): + CONNACK_MID = "CONNECTED" + DISCONNECT_MID = "DISCONNECTED" + MESSAGE_MID = "MESSAGE" + QUEUED_MID = "QUEUED" diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/queues.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/queues.py new file mode 100644 index 0000000..77046a8 --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/queues.py @@ -0,0 +1,87 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. 
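The `METRICS_PREFIX` constant from defaults.py surfaces later in `MqttCore._load_username_password`: when metrics collection is enabled, the SDK version is appended to the MQTT username as a query-string-style suffix. A sketch of that composition (the version string here is a hypothetical placeholder, not tied to any actual release):

```python
METRICS_PREFIX = "?SDK=Python&Version="

def build_username(username, sdk_version, enable_metrics=True):
    # Mirrors the shape of MqttCore._load_username_password: the metrics
    # suffix rides along on the MQTT username when collection is enabled.
    if enable_metrics:
        return username + METRICS_PREFIX + sdk_version
    return username

print(build_username("", "1.4.9"))        # metrics suffix on an empty username
print(build_username("alice", "1.4.9", enable_metrics=False))
```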
+# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +import logging +from AWSIoTPythonSDK.core.util.enums import DropBehaviorTypes + + +class AppendResults(object): + APPEND_FAILURE_QUEUE_FULL = -1 + APPEND_FAILURE_QUEUE_DISABLED = -2 + APPEND_SUCCESS = 0 + + +class OfflineRequestQueue(list): + _logger = logging.getLogger(__name__) + + def __init__(self, max_size, drop_behavior=DropBehaviorTypes.DROP_NEWEST): + if not isinstance(max_size, int) or not isinstance(drop_behavior, int): + self._logger.error("init: MaximumSize/DropBehavior must be integer.") + raise TypeError("MaximumSize/DropBehavior must be integer.") + if drop_behavior != DropBehaviorTypes.DROP_OLDEST and drop_behavior != DropBehaviorTypes.DROP_NEWEST: + self._logger.error("init: Drop behavior not supported.") + raise ValueError("Drop behavior not supported.") + + list.__init__(self) + self._drop_behavior = drop_behavior + # When self._max_size > 0, queue is limited + # When self._max_size == 0, queue is disabled + # When self._max_size < 0, queue is infinite + self._max_size = max_size + + def _is_enabled(self): + return self._max_size != 0 + + def _need_drop_messages(self): + # Need to drop messages when: + # 1. Queue is limited and full + # 2.
Queue is disabled + is_queue_full = len(self) >= self._max_size + is_queue_limited = self._max_size > 0 + is_queue_disabled = not self._is_enabled() + return (is_queue_full and is_queue_limited) or is_queue_disabled + + def set_behavior_drop_newest(self): + self._drop_behavior = DropBehaviorTypes.DROP_NEWEST + + def set_behavior_drop_oldest(self): + self._drop_behavior = DropBehaviorTypes.DROP_OLDEST + + # Override + # Append to a queue with a limited size. + # Return APPEND_SUCCESS if the append is successful + # Return APPEND_FAILURE_QUEUE_FULL if the append failed because the queue is full + # Return APPEND_FAILURE_QUEUE_DISABLED if the append failed because the queue is disabled + def append(self, data): + ret = AppendResults.APPEND_SUCCESS + if self._is_enabled(): + if self._need_drop_messages(): + # We should drop the newest + if DropBehaviorTypes.DROP_NEWEST == self._drop_behavior: + self._logger.warn("append: Full queue. Drop the newest: " + str(data)) + ret = AppendResults.APPEND_FAILURE_QUEUE_FULL + # We should drop the oldest + else: + current_oldest = super(OfflineRequestQueue, self).pop(0) + self._logger.warn("append: Full queue. Drop the oldest: " + str(current_oldest)) + super(OfflineRequestQueue, self).append(data) + ret = AppendResults.APPEND_FAILURE_QUEUE_FULL + else: + self._logger.debug("append: Add new element: " + str(data)) + super(OfflineRequestQueue, self).append(data) + else: + self._logger.debug("append: Queue is disabled. Drop the message: " + str(data)) + ret = AppendResults.APPEND_FAILURE_QUEUE_DISABLED + return ret diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/requests.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/requests.py new file mode 100644 index 0000000..bd2585d --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/requests.py @@ -0,0 +1,27 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 
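The append/drop logic in `OfflineRequestQueue` above boils down to three `max_size` regimes (disabled at 0, bounded when positive, unbounded when negative) plus two drop behaviors on overflow. A stripped-down sketch of just that decision, using stand-in integer codes rather than the SDK's `DropBehaviorTypes`:

```python
DROP_OLDEST, DROP_NEWEST = 0, 1   # stand-ins for DropBehaviorTypes values

def bounded_append(queue, data, max_size, drop_behavior):
    """Append under a size cap; returns True iff data made it into the queue."""
    if max_size == 0:
        return False                  # queue disabled: drop everything
    if max_size > 0 and len(queue) >= max_size:
        if drop_behavior == DROP_NEWEST:
            return False              # the incoming element is the casualty
        queue.pop(0)                  # DROP_OLDEST: evict the head, keep new
    queue.append(data)                # negative max_size never hits the cap
    return True

q = [1, 2, 3]
bounded_append(q, 4, 3, DROP_OLDEST)            # evicts 1, q becomes [2, 3, 4]
dropped = bounded_append(q, 5, 3, DROP_NEWEST)  # 5 never enters the queue
```

Note one quirk of the real `append` worth keeping in mind: even a successful DROP_OLDEST insertion returns `APPEND_FAILURE_QUEUE_FULL`, signalling to the caller that *something* was lost, not that the new element was rejected.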
+# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +class RequestTypes(object): + CONNECT = 0 + DISCONNECT = 1 + PUBLISH = 2 + SUBSCRIBE = 3 + UNSUBSCRIBE = 4 + +class QueueableRequest(object): + + def __init__(self, type, data): + self.type = type + self.data = data # Can be a tuple diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/workers.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/workers.py new file mode 100644 index 0000000..e52db3f --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/internal/workers.py @@ -0,0 +1,296 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
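requests.py simply pairs a `RequestTypes` code with the positional-argument tuple of the deferred call; the consumer's `_handle_offline_*` methods later unpack that tuple in the same order it was packed. A minimal illustration of that pack/dispatch/unpack round trip (the handler and topic names are illustrative stand-ins, not SDK code):

```python
class RequestTypes(object):
    CONNECT, DISCONNECT, PUBLISH, SUBSCRIBE, UNSUBSCRIBE = range(5)

class QueueableRequest(object):
    def __init__(self, type, data):
        self.type = type
        self.data = data  # tuple of the deferred call's arguments

sent = []

def handle_offline_publish(request):
    # Unpacks in the exact order the producer packed it
    topic, payload, qos, retain = request.data
    sent.append((topic, payload, qos, retain))

request = QueueableRequest(RequestTypes.PUBLISH, ("sensors/temp", b"21.5", 1, False))
handlers = {RequestTypes.PUBLISH: handle_offline_publish}
handlers[request.type](request)
```

Because the tuple layout is an implicit contract between producer and handler, adding a field to one side without the other is a silent way to break draining; the SDK keeps both sides in the same pair of files to limit that risk.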
+# */ + +import time +import logging +from threading import Thread +from threading import Event +from AWSIoTPythonSDK.core.protocol.internal.events import EventTypes +from AWSIoTPythonSDK.core.protocol.internal.events import FixedEventMids +from AWSIoTPythonSDK.core.protocol.internal.clients import ClientStatus +from AWSIoTPythonSDK.core.protocol.internal.queues import OfflineRequestQueue +from AWSIoTPythonSDK.core.protocol.internal.requests import RequestTypes +from AWSIoTPythonSDK.core.protocol.paho.client import topic_matches_sub +from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_DRAINING_INTERNAL_SEC + + +class EventProducer(object): + + _logger = logging.getLogger(__name__) + + def __init__(self, cv, event_queue): + self._cv = cv + self._event_queue = event_queue + + def on_connect(self, client, user_data, flags, rc): + self._add_to_queue(FixedEventMids.CONNACK_MID, EventTypes.CONNACK, rc) + self._logger.debug("Produced [connack] event") + + def on_disconnect(self, client, user_data, rc): + self._add_to_queue(FixedEventMids.DISCONNECT_MID, EventTypes.DISCONNECT, rc) + self._logger.debug("Produced [disconnect] event") + + def on_publish(self, client, user_data, mid): + self._add_to_queue(mid, EventTypes.PUBACK, None) + self._logger.debug("Produced [puback] event") + + def on_subscribe(self, client, user_data, mid, granted_qos): + self._add_to_queue(mid, EventTypes.SUBACK, granted_qos) + self._logger.debug("Produced [suback] event") + + def on_unsubscribe(self, client, user_data, mid): + self._add_to_queue(mid, EventTypes.UNSUBACK, None) + self._logger.debug("Produced [unsuback] event") + + def on_message(self, client, user_data, message): + self._add_to_queue(FixedEventMids.MESSAGE_MID, EventTypes.MESSAGE, message) + self._logger.debug("Produced [message] event") + + def _add_to_queue(self, mid, event_type, data): + with self._cv: + self._event_queue.put((mid, event_type, data)) + self._cv.notify() + + +class EventConsumer(object): + + 
MAX_DISPATCH_INTERNAL_SEC = 0.01 + _logger = logging.getLogger(__name__) + + def __init__(self, cv, event_queue, internal_async_client, + subscription_manager, offline_requests_manager, client_status): + self._cv = cv + self._event_queue = event_queue + self._internal_async_client = internal_async_client + self._subscription_manager = subscription_manager + self._offline_requests_manager = offline_requests_manager + self._client_status = client_status + self._is_running = False + self._draining_interval_sec = DEFAULT_DRAINING_INTERNAL_SEC + self._dispatch_methods = { + EventTypes.CONNACK : self._dispatch_connack, + EventTypes.DISCONNECT : self._dispatch_disconnect, + EventTypes.PUBACK : self._dispatch_puback, + EventTypes.SUBACK : self._dispatch_suback, + EventTypes.UNSUBACK : self._dispatch_unsuback, + EventTypes.MESSAGE : self._dispatch_message + } + self._offline_request_handlers = { + RequestTypes.PUBLISH : self._handle_offline_publish, + RequestTypes.SUBSCRIBE : self._handle_offline_subscribe, + RequestTypes.UNSUBSCRIBE : self._handle_offline_unsubscribe + } + self._stopper = Event() + + def update_offline_requests_manager(self, offline_requests_manager): + self._offline_requests_manager = offline_requests_manager + + def update_draining_interval_sec(self, draining_interval_sec): + self._draining_interval_sec = draining_interval_sec + + def get_draining_interval_sec(self): + return self._draining_interval_sec + + def is_running(self): + return self._is_running + + def start(self): + self._stopper.clear() + self._is_running = True + dispatch_events = Thread(target=self._dispatch) + dispatch_events.daemon = True + dispatch_events.start() + self._logger.debug("Event consuming thread started") + + def stop(self): + if self._is_running: + self._is_running = False + self._clean_up() + self._logger.debug("Event consuming thread stopped") + + def _clean_up(self): + self._logger.debug("Cleaning up before stopping event consuming") + with self._event_queue.mutex: + 
self._event_queue.queue.clear() + self._logger.debug("Event queue cleared") + self._internal_async_client.stop_background_network_io() + self._logger.debug("Network thread stopped") + self._internal_async_client.clean_up_event_callbacks() + self._logger.debug("Event callbacks cleared") + + def wait_until_it_stops(self, timeout_sec): + self._logger.debug("Waiting for event consumer to completely stop") + return self._stopper.wait(timeout=timeout_sec) + + def is_fully_stopped(self): + return self._stopper.is_set() + + def _dispatch(self): + while self._is_running: + with self._cv: + if self._event_queue.empty(): + self._cv.wait(self.MAX_DISPATCH_INTERNAL_SEC) + else: + while not self._event_queue.empty(): + self._dispatch_one() + self._stopper.set() + self._logger.debug("Exiting dispatching loop...") + + def _dispatch_one(self): + mid, event_type, data = self._event_queue.get() + if mid: + self._dispatch_methods[event_type](mid, data) + self._internal_async_client.invoke_event_callback(mid, data=data) + # We need to make sure disconnect event gets dispatched and then we stop the consumer + if self._need_to_stop_dispatching(mid): + self.stop() + + def _need_to_stop_dispatching(self, mid): + status = self._client_status.get_status() + return (ClientStatus.USER_DISCONNECT == status or ClientStatus.CONNECT == status) \ + and mid == FixedEventMids.DISCONNECT_MID + + def _dispatch_connack(self, mid, rc): + status = self._client_status.get_status() + self._logger.debug("Dispatching [connack] event") + if self._need_recover(): + if ClientStatus.STABLE != status: # To avoid multiple connack dispatching + self._logger.debug("Has recovery job") + clean_up_debt = Thread(target=self._clean_up_debt) + clean_up_debt.start() + else: + self._logger.debug("No need for recovery") + self._client_status.set_status(ClientStatus.STABLE) + + def _need_recover(self): + return self._subscription_manager.list_records() or self._offline_requests_manager.has_more() + + def _clean_up_debt(self): 
+ self._handle_resubscribe() + self._handle_draining() + self._client_status.set_status(ClientStatus.STABLE) + + def _handle_resubscribe(self): + subscriptions = self._subscription_manager.list_records() + if subscriptions and not self._has_user_disconnect_request(): + self._logger.debug("Start resubscribing") + self._client_status.set_status(ClientStatus.RESUBSCRIBE) + for topic, (qos, message_callback, ack_callback) in subscriptions: + if self._has_user_disconnect_request(): + self._logger.debug("User disconnect detected") + break + self._internal_async_client.subscribe(topic, qos, ack_callback) + + def _handle_draining(self): + if self._offline_requests_manager.has_more() and not self._has_user_disconnect_request(): + self._logger.debug("Start draining") + self._client_status.set_status(ClientStatus.DRAINING) + while self._offline_requests_manager.has_more(): + if self._has_user_disconnect_request(): + self._logger.debug("User disconnect detected") + break + offline_request = self._offline_requests_manager.get_next() + if offline_request: + self._offline_request_handlers[offline_request.type](offline_request) + time.sleep(self._draining_interval_sec) + + def _has_user_disconnect_request(self): + return ClientStatus.USER_DISCONNECT == self._client_status.get_status() + + def _dispatch_disconnect(self, mid, rc): + self._logger.debug("Dispatching [disconnect] event") + status = self._client_status.get_status() + if ClientStatus.USER_DISCONNECT == status or ClientStatus.CONNECT == status: + pass + else: + self._client_status.set_status(ClientStatus.ABNORMAL_DISCONNECT) + + # For puback, suback and unsuback, ack callback invocation is handled in dispatch_one + # Do nothing in the event dispatching itself + def _dispatch_puback(self, mid, rc): + self._logger.debug("Dispatching [puback] event") + + def _dispatch_suback(self, mid, rc): + self._logger.debug("Dispatching [suback] event") + + def _dispatch_unsuback(self, mid, rc): + self._logger.debug("Dispatching 
[unsuback] event") + + def _dispatch_message(self, mid, message): + self._logger.debug("Dispatching [message] event") + subscriptions = self._subscription_manager.list_records() + if subscriptions: + for topic, (qos, message_callback, _) in subscriptions: + if topic_matches_sub(topic, message.topic) and message_callback: + message_callback(None, None, message) # message_callback(client, userdata, message) + + def _handle_offline_publish(self, request): + topic, payload, qos, retain = request.data + self._internal_async_client.publish(topic, payload, qos, retain) + self._logger.debug("Processed offline publish request") + + def _handle_offline_subscribe(self, request): + topic, qos, message_callback, ack_callback = request.data + self._subscription_manager.add_record(topic, qos, message_callback, ack_callback) + self._internal_async_client.subscribe(topic, qos, ack_callback) + self._logger.debug("Processed offline subscribe request") + + def _handle_offline_unsubscribe(self, request): + topic, ack_callback = request.data + self._subscription_manager.remove_record(topic) + self._internal_async_client.unsubscribe(topic, ack_callback) + self._logger.debug("Processed offline unsubscribe request") + + +class SubscriptionManager(object): + + _logger = logging.getLogger(__name__) + + def __init__(self): + self._subscription_map = dict() + + def add_record(self, topic, qos, message_callback, ack_callback): + self._logger.debug("Adding a new subscription record: %s qos: %d", topic, qos) + self._subscription_map[topic] = qos, message_callback, ack_callback # message_callback and/or ack_callback could be None + + def remove_record(self, topic): + self._logger.debug("Removing subscription record: %s", topic) + if self._subscription_map.get(topic): # Ignore topics that were never subscribed to + del self._subscription_map[topic] + else: + self._logger.warn("Attempted to remove a non-existent subscription record: %s", topic) + + def list_records(self): + return
list(self._subscription_map.items()) + + +class OfflineRequestsManager(object): + + _logger = logging.getLogger(__name__) + + def __init__(self, max_size, drop_behavior): + self._queue = OfflineRequestQueue(max_size, drop_behavior) + + def has_more(self): + return len(self._queue) > 0 + + def add_one(self, request): + return self._queue.append(request) + + def get_next(self): + if self.has_more(): + return self._queue.pop(0) + else: + return None diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/mqtt_core.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/mqtt_core.py new file mode 100644 index 0000000..e2f98fc --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/mqtt_core.py @@ -0,0 +1,373 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
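mqtt_core.py builds its synchronous connect/disconnect/publish on top of the async paths by parking on a `threading.Event` until the ack callback fires. `_create_blocking_ack_callback` itself is not shown in this chunk, but the call sites (`event = Event()` followed by `event.wait(timeout)`) imply a shape like the following sketch — an assumption about the implementation, not the SDK's exact code:

```python
from threading import Event, Thread
import time

def create_blocking_ack_callback(event):
    # Presumed shape: ignore the ack arguments, just release the waiter.
    def ack_callback(mid, data=None):
        event.set()
    return ack_callback

def sync_over_async(start_async_op, timeout_sec):
    event = Event()
    start_async_op(create_blocking_ack_callback(event))
    if not event.wait(timeout_sec):
        # The SDK raises the matching *TimeoutException at this point
        raise RuntimeError("operation timed out")
    return True

def fake_async_connect(ack_callback):
    # Simulate the network I/O thread delivering CONNACK shortly after
    Thread(target=lambda: (time.sleep(0.05), ack_callback("CONNECTED", 0))).start()

result = sync_over_async(fake_async_connect, 2)
```

The timeout paths in `publish`/`subscribe` pair this with `remove_event_callback(mid)` so a late-arriving ack cannot fire a callback whose waiter has already given up.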
+# */ + +import AWSIoTPythonSDK +from AWSIoTPythonSDK.core.protocol.internal.clients import InternalAsyncMqttClient +from AWSIoTPythonSDK.core.protocol.internal.clients import ClientStatusContainer +from AWSIoTPythonSDK.core.protocol.internal.clients import ClientStatus +from AWSIoTPythonSDK.core.protocol.internal.workers import EventProducer +from AWSIoTPythonSDK.core.protocol.internal.workers import EventConsumer +from AWSIoTPythonSDK.core.protocol.internal.workers import SubscriptionManager +from AWSIoTPythonSDK.core.protocol.internal.workers import OfflineRequestsManager +from AWSIoTPythonSDK.core.protocol.internal.requests import RequestTypes +from AWSIoTPythonSDK.core.protocol.internal.requests import QueueableRequest +from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC +from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_OPERATION_TIMEOUT_SEC +from AWSIoTPythonSDK.core.protocol.internal.defaults import METRICS_PREFIX +from AWSIoTPythonSDK.core.protocol.internal.defaults import ALPN_PROTCOLS +from AWSIoTPythonSDK.core.protocol.internal.events import FixedEventMids +from AWSIoTPythonSDK.core.protocol.paho.client import MQTT_ERR_SUCCESS +from AWSIoTPythonSDK.exception.AWSIoTExceptions import connectError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import connectTimeoutException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import disconnectError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import disconnectTimeoutException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishTimeoutException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishQueueFullException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishQueueDisabledException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeQueueFullException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import 
subscribeQueueDisabledException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeQueueFullException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeQueueDisabledException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeTimeoutException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeTimeoutException +from AWSIoTPythonSDK.core.protocol.internal.queues import AppendResults +from AWSIoTPythonSDK.core.util.enums import DropBehaviorTypes +from AWSIoTPythonSDK.core.protocol.paho.client import MQTTv31 +from threading import Condition +from threading import Event +import logging +import sys +if sys.version_info[0] < 3: + from Queue import Queue +else: + from queue import Queue + + +class MqttCore(object): + + _logger = logging.getLogger(__name__) + + def __init__(self, client_id, clean_session, protocol, use_wss): + self._use_wss = use_wss + self._username = "" + self._password = None + self._enable_metrics_collection = True + self._event_queue = Queue() + self._event_cv = Condition() + self._event_producer = EventProducer(self._event_cv, self._event_queue) + self._client_status = ClientStatusContainer() + self._internal_async_client = InternalAsyncMqttClient(client_id, clean_session, protocol, use_wss) + self._subscription_manager = SubscriptionManager() + self._offline_requests_manager = OfflineRequestsManager(-1, DropBehaviorTypes.DROP_NEWEST) # Infinite queue + self._event_consumer = EventConsumer(self._event_cv, + self._event_queue, + self._internal_async_client, + self._subscription_manager, + self._offline_requests_manager, + self._client_status) + self._connect_disconnect_timeout_sec = DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC + self._operation_timeout_sec = DEFAULT_OPERATION_TIMEOUT_SEC + self._init_offline_request_exceptions() + self._init_workers() + 
self._logger.info("MqttCore initialized") + self._logger.info("Client id: %s" % client_id) + self._logger.info("Protocol version: %s" % ("MQTTv3.1" if protocol == MQTTv31 else "MQTTv3.1.1")) + self._logger.info("Authentication type: %s" % ("SigV4 WebSocket" if use_wss else "TLSv1.2 certificate based Mutual Auth.")) + + def _init_offline_request_exceptions(self): + self._offline_request_queue_disabled_exceptions = { + RequestTypes.PUBLISH : publishQueueDisabledException(), + RequestTypes.SUBSCRIBE : subscribeQueueDisabledException(), + RequestTypes.UNSUBSCRIBE : unsubscribeQueueDisabledException() + } + self._offline_request_queue_full_exceptions = { + RequestTypes.PUBLISH : publishQueueFullException(), + RequestTypes.SUBSCRIBE : subscribeQueueFullException(), + RequestTypes.UNSUBSCRIBE : unsubscribeQueueFullException() + } + + def _init_workers(self): + self._internal_async_client.register_internal_event_callbacks(self._event_producer.on_connect, + self._event_producer.on_disconnect, + self._event_producer.on_publish, + self._event_producer.on_subscribe, + self._event_producer.on_unsubscribe, + self._event_producer.on_message) + + def _start_workers(self): + self._event_consumer.start() + + def use_wss(self): + return self._use_wss + + # Used for general message event reception + def on_message(self, message): + pass + + # Used for general online event notification + def on_online(self): + pass + + # Used for general offline event notification + def on_offline(self): + pass + + def configure_cert_credentials(self, cert_credentials_provider): + self._logger.info("Configuring certificates...") + self._internal_async_client.set_cert_credentials_provider(cert_credentials_provider) + + def configure_iam_credentials(self, iam_credentials_provider): + self._logger.info("Configuring custom IAM credentials...") + self._internal_async_client.set_iam_credentials_provider(iam_credentials_provider) + + def configure_endpoint(self, endpoint_provider): + 
self._logger.info("Configuring endpoint...") + self._internal_async_client.set_endpoint_provider(endpoint_provider) + + def configure_connect_disconnect_timeout_sec(self, connect_disconnect_timeout_sec): + self._logger.info("Configuring connect/disconnect time out: %f sec" % connect_disconnect_timeout_sec) + self._connect_disconnect_timeout_sec = connect_disconnect_timeout_sec + + def configure_operation_timeout_sec(self, operation_timeout_sec): + self._logger.info("Configuring MQTT operation time out: %f sec" % operation_timeout_sec) + self._operation_timeout_sec = operation_timeout_sec + + def configure_reconnect_back_off(self, base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec): + self._logger.info("Configuring reconnect back off timing...") + self._logger.info("Base quiet time: %f sec" % base_reconnect_quiet_sec) + self._logger.info("Max quiet time: %f sec" % max_reconnect_quiet_sec) + self._logger.info("Stable connection time: %f sec" % stable_connection_sec) + self._internal_async_client.configure_reconnect_back_off(base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec) + + def configure_alpn_protocols(self): + self._logger.info("Configuring alpn protocols...") + self._internal_async_client.configure_alpn_protocols([ALPN_PROTCOLS]) + + def configure_last_will(self, topic, payload, qos, retain=False): + self._logger.info("Configuring last will...") + self._internal_async_client.configure_last_will(topic, payload, qos, retain) + + def clear_last_will(self): + self._logger.info("Clearing last will...") + self._internal_async_client.clear_last_will() + + def configure_username_password(self, username, password=None): + self._logger.info("Configuring username and password...") + self._username = username + self._password = password + + def configure_socket_factory(self, socket_factory): + self._logger.info("Configuring socket factory...") + self._internal_async_client.set_socket_factory(socket_factory) + + def 
enable_metrics_collection(self): + self._enable_metrics_collection = True + + def disable_metrics_collection(self): + self._enable_metrics_collection = False + + def configure_offline_requests_queue(self, max_size, drop_behavior): + self._logger.info("Configuring offline requests queueing: max queue size: %d", max_size) + self._offline_requests_manager = OfflineRequestsManager(max_size, drop_behavior) + self._event_consumer.update_offline_requests_manager(self._offline_requests_manager) + + def configure_draining_interval_sec(self, draining_interval_sec): + self._logger.info("Configuring offline requests queue draining interval: %f sec", draining_interval_sec) + self._event_consumer.update_draining_interval_sec(draining_interval_sec) + + def connect(self, keep_alive_sec): + self._logger.info("Performing sync connect...") + event = Event() + self.connect_async(keep_alive_sec, self._create_blocking_ack_callback(event)) + if not event.wait(self._connect_disconnect_timeout_sec): + self._logger.error("Connect timed out") + raise connectTimeoutException() + return True + + def connect_async(self, keep_alive_sec, ack_callback=None): + self._logger.info("Performing async connect...") + self._logger.info("Keep-alive: %f sec" % keep_alive_sec) + self._start_workers() + self._load_callbacks() + self._load_username_password() + + try: + self._client_status.set_status(ClientStatus.CONNECT) + rc = self._internal_async_client.connect(keep_alive_sec, ack_callback) + if MQTT_ERR_SUCCESS != rc: + self._logger.error("Connect error: %d", rc) + raise connectError(rc) + except Exception as e: + # Provided any error in connect, we should clean up the threads that have been created + self._event_consumer.stop() + if not self._event_consumer.wait_until_it_stops(self._connect_disconnect_timeout_sec): + self._logger.error("Time out in waiting for event consumer to stop") + else: + self._logger.debug("Event consumer stopped") + self._client_status.set_status(ClientStatus.IDLE) + raise e + + 
return FixedEventMids.CONNACK_MID + + def _load_callbacks(self): + self._logger.debug("Passing in general notification callbacks to internal client...") + self._internal_async_client.on_online = self.on_online + self._internal_async_client.on_offline = self.on_offline + self._internal_async_client.on_message = self.on_message + + def _load_username_password(self): + username_candidate = self._username + if self._enable_metrics_collection: + username_candidate += METRICS_PREFIX + username_candidate += AWSIoTPythonSDK.__version__ + self._internal_async_client.set_username_password(username_candidate, self._password) + + def disconnect(self): + self._logger.info("Performing sync disconnect...") + event = Event() + self.disconnect_async(self._create_blocking_ack_callback(event)) + if not event.wait(self._connect_disconnect_timeout_sec): + self._logger.error("Disconnect timed out") + raise disconnectTimeoutException() + if not self._event_consumer.wait_until_it_stops(self._connect_disconnect_timeout_sec): + self._logger.error("Disconnect timed out in waiting for event consumer") + raise disconnectTimeoutException() + return True + + def disconnect_async(self, ack_callback=None): + self._logger.info("Performing async disconnect...") + self._client_status.set_status(ClientStatus.USER_DISCONNECT) + rc = self._internal_async_client.disconnect(ack_callback) + if MQTT_ERR_SUCCESS != rc: + self._logger.error("Disconnect error: %d", rc) + raise disconnectError(rc) + return FixedEventMids.DISCONNECT_MID + + def publish(self, topic, payload, qos, retain=False): + self._logger.info("Performing sync publish...") + ret = False + if ClientStatus.STABLE != self._client_status.get_status(): + self._handle_offline_request(RequestTypes.PUBLISH, (topic, payload, qos, retain)) + else: + if qos > 0: + event = Event() + rc, mid = self._publish_async(topic, payload, qos, retain, self._create_blocking_ack_callback(event)) + if not event.wait(self._operation_timeout_sec): + 
self._internal_async_client.remove_event_callback(mid) + self._logger.error("Publish timed out") + raise publishTimeoutException() + else: + self._publish_async(topic, payload, qos, retain) + ret = True + return ret + + def publish_async(self, topic, payload, qos, retain=False, ack_callback=None): + self._logger.info("Performing async publish...") + if ClientStatus.STABLE != self._client_status.get_status(): + self._handle_offline_request(RequestTypes.PUBLISH, (topic, payload, qos, retain)) + return FixedEventMids.QUEUED_MID + else: + rc, mid = self._publish_async(topic, payload, qos, retain, ack_callback) + return mid + + def _publish_async(self, topic, payload, qos, retain=False, ack_callback=None): + rc, mid = self._internal_async_client.publish(topic, payload, qos, retain, ack_callback) + if MQTT_ERR_SUCCESS != rc: + self._logger.error("Publish error: %d", rc) + raise publishError(rc) + return rc, mid + + def subscribe(self, topic, qos, message_callback=None): + self._logger.info("Performing sync subscribe...") + ret = False + if ClientStatus.STABLE != self._client_status.get_status(): + self._handle_offline_request(RequestTypes.SUBSCRIBE, (topic, qos, message_callback, None)) + else: + event = Event() + rc, mid = self._subscribe_async(topic, qos, self._create_blocking_ack_callback(event), message_callback) + if not event.wait(self._operation_timeout_sec): + self._internal_async_client.remove_event_callback(mid) + self._logger.error("Subscribe timed out") + raise subscribeTimeoutException() + ret = True + return ret + + def subscribe_async(self, topic, qos, ack_callback=None, message_callback=None): + self._logger.info("Performing async subscribe...") + if ClientStatus.STABLE != self._client_status.get_status(): + self._handle_offline_request(RequestTypes.SUBSCRIBE, (topic, qos, message_callback, ack_callback)) + return FixedEventMids.QUEUED_MID + else: + rc, mid = self._subscribe_async(topic, qos, ack_callback, message_callback) + return mid + + def _subscribe_async(self, topic, qos, ack_callback=None, message_callback=None): + self._subscription_manager.add_record(topic, qos, message_callback, ack_callback) + rc, mid = self._internal_async_client.subscribe(topic, qos, ack_callback) + if MQTT_ERR_SUCCESS != rc: + self._logger.error("Subscribe error: %d", rc) + raise subscribeError(rc) + return rc, mid + + def unsubscribe(self, topic): + self._logger.info("Performing sync unsubscribe...") + ret = False + if ClientStatus.STABLE != self._client_status.get_status(): + self._handle_offline_request(RequestTypes.UNSUBSCRIBE, (topic, None)) + else: + event = Event() + rc, mid = self._unsubscribe_async(topic, self._create_blocking_ack_callback(event)) + if not event.wait(self._operation_timeout_sec): + self._internal_async_client.remove_event_callback(mid) + self._logger.error("Unsubscribe timed out") + raise unsubscribeTimeoutException() + ret = True + return ret + + def unsubscribe_async(self, topic, ack_callback=None): + self._logger.info("Performing async unsubscribe...") + if ClientStatus.STABLE != self._client_status.get_status(): + self._handle_offline_request(RequestTypes.UNSUBSCRIBE, (topic, ack_callback)) + return FixedEventMids.QUEUED_MID + else: + rc, mid = self._unsubscribe_async(topic, ack_callback) + return mid + + def _unsubscribe_async(self, topic, ack_callback=None): + self._subscription_manager.remove_record(topic) + rc, mid = self._internal_async_client.unsubscribe(topic, ack_callback) + if MQTT_ERR_SUCCESS != rc: + self._logger.error("Unsubscribe error: %d", rc) + raise unsubscribeError(rc) + return rc, mid + + def _create_blocking_ack_callback(self, event): + def ack_callback(mid, data=None): + event.set() + return ack_callback + + def _handle_offline_request(self, type, data): + self._logger.info("Offline request detected!") + offline_request = QueueableRequest(type, data) + append_result = self._offline_requests_manager.add_one(offline_request) + if AppendResults.APPEND_FAILURE_QUEUE_DISABLED == append_result: + self._logger.error("Offline request queue has been disabled") + raise self._offline_request_queue_disabled_exceptions[type] + if AppendResults.APPEND_FAILURE_QUEUE_FULL == append_result: + self._logger.error("Offline request queue is full") + raise self._offline_request_queue_full_exceptions[type] diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/paho/__init__.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/paho/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/paho/client.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/paho/client.py new file mode 100644 index 0000000..503d1c6 --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/protocol/paho/client.py @@ -0,0 +1,2445 @@ +# Copyright (c) 2012-2014 Roger Light +# +# All rights reserved. This program and the accompanying materials +# are made available under the terms of the Eclipse Public License v1.0 +# and Eclipse Distribution License v1.0 which accompany this distribution. +# +# The Eclipse Public License is available at +# http://www.eclipse.org/legal/epl-v10.html +# and the Eclipse Distribution License is available at +# http://www.eclipse.org/org/documents/edl-v10.php. +# +# Contributors: +# Roger Light - initial API and implementation + +""" +This is an MQTT v3.1 client module. MQTT is a lightweight pub/sub messaging +protocol that is easy to implement and suitable for low powered devices. 
+""" +import errno +import platform +import random +import select +import socket +HAVE_SSL = True +try: + import ssl + cert_reqs = ssl.CERT_REQUIRED + tls_version = ssl.PROTOCOL_TLSv1 +except: + HAVE_SSL = False + cert_reqs = None + tls_version = None +import struct +import sys +import threading +import time +HAVE_DNS = True +try: + import dns.resolver +except ImportError: + HAVE_DNS = False + +if platform.system() == 'Windows': + EAGAIN = errno.WSAEWOULDBLOCK +else: + EAGAIN = errno.EAGAIN + +from AWSIoTPythonSDK.core.protocol.connection.cores import ProgressiveBackOffCore +from AWSIoTPythonSDK.core.protocol.connection.cores import SecuredWebSocketCore +from AWSIoTPythonSDK.core.protocol.connection.alpn import SSLContextBuilder + +VERSION_MAJOR=1 +VERSION_MINOR=0 +VERSION_REVISION=0 +VERSION_NUMBER=(VERSION_MAJOR*1000000+VERSION_MINOR*1000+VERSION_REVISION) + +MQTTv31 = 3 +MQTTv311 = 4 + +if sys.version_info[0] < 3: + PROTOCOL_NAMEv31 = "MQIsdp" + PROTOCOL_NAMEv311 = "MQTT" +else: + PROTOCOL_NAMEv31 = b"MQIsdp" + PROTOCOL_NAMEv311 = b"MQTT" + +PROTOCOL_VERSION = 3 + +# Message types +CONNECT = 0x10 +CONNACK = 0x20 +PUBLISH = 0x30 +PUBACK = 0x40 +PUBREC = 0x50 +PUBREL = 0x60 +PUBCOMP = 0x70 +SUBSCRIBE = 0x80 +SUBACK = 0x90 +UNSUBSCRIBE = 0xA0 +UNSUBACK = 0xB0 +PINGREQ = 0xC0 +PINGRESP = 0xD0 +DISCONNECT = 0xE0 + +# Log levels +MQTT_LOG_INFO = 0x01 +MQTT_LOG_NOTICE = 0x02 +MQTT_LOG_WARNING = 0x04 +MQTT_LOG_ERR = 0x08 +MQTT_LOG_DEBUG = 0x10 + +# CONNACK codes +CONNACK_ACCEPTED = 0 +CONNACK_REFUSED_PROTOCOL_VERSION = 1 +CONNACK_REFUSED_IDENTIFIER_REJECTED = 2 +CONNACK_REFUSED_SERVER_UNAVAILABLE = 3 +CONNACK_REFUSED_BAD_USERNAME_PASSWORD = 4 +CONNACK_REFUSED_NOT_AUTHORIZED = 5 + +# Connection state +mqtt_cs_new = 0 +mqtt_cs_connected = 1 +mqtt_cs_disconnecting = 2 +mqtt_cs_connect_async = 3 + +# Message state +mqtt_ms_invalid = 0 +mqtt_ms_publish= 1 +mqtt_ms_wait_for_puback = 2 +mqtt_ms_wait_for_pubrec = 3 +mqtt_ms_resend_pubrel = 4 +mqtt_ms_wait_for_pubrel = 5 
+mqtt_ms_resend_pubcomp = 6 +mqtt_ms_wait_for_pubcomp = 7 +mqtt_ms_send_pubrec = 8 +mqtt_ms_queued = 9 + +# Error values +MQTT_ERR_AGAIN = -1 +MQTT_ERR_SUCCESS = 0 +MQTT_ERR_NOMEM = 1 +MQTT_ERR_PROTOCOL = 2 +MQTT_ERR_INVAL = 3 +MQTT_ERR_NO_CONN = 4 +MQTT_ERR_CONN_REFUSED = 5 +MQTT_ERR_NOT_FOUND = 6 +MQTT_ERR_CONN_LOST = 7 +MQTT_ERR_TLS = 8 +MQTT_ERR_PAYLOAD_SIZE = 9 +MQTT_ERR_NOT_SUPPORTED = 10 +MQTT_ERR_AUTH = 11 +MQTT_ERR_ACL_DENIED = 12 +MQTT_ERR_UNKNOWN = 13 +MQTT_ERR_ERRNO = 14 + +# MessageQueueing DropBehavior +MSG_QUEUEING_DROP_OLDEST = 0 +MSG_QUEUEING_DROP_NEWEST = 1 + +if sys.version_info[0] < 3: + sockpair_data = "0" +else: + sockpair_data = b"0" + +def error_string(mqtt_errno): + """Return the error string associated with an mqtt error number.""" + if mqtt_errno == MQTT_ERR_SUCCESS: + return "No error." + elif mqtt_errno == MQTT_ERR_NOMEM: + return "Out of memory." + elif mqtt_errno == MQTT_ERR_PROTOCOL: + return "A network protocol error occurred when communicating with the broker." + elif mqtt_errno == MQTT_ERR_INVAL: + return "Invalid function arguments provided." + elif mqtt_errno == MQTT_ERR_NO_CONN: + return "The client is not currently connected." + elif mqtt_errno == MQTT_ERR_CONN_REFUSED: + return "The connection was refused." + elif mqtt_errno == MQTT_ERR_NOT_FOUND: + return "Message not found (internal error)." + elif mqtt_errno == MQTT_ERR_CONN_LOST: + return "The connection was lost." + elif mqtt_errno == MQTT_ERR_TLS: + return "A TLS error occurred." + elif mqtt_errno == MQTT_ERR_PAYLOAD_SIZE: + return "Payload too large." + elif mqtt_errno == MQTT_ERR_NOT_SUPPORTED: + return "This feature is not supported." + elif mqtt_errno == MQTT_ERR_AUTH: + return "Authorisation failed." + elif mqtt_errno == MQTT_ERR_ACL_DENIED: + return "Access denied by ACL." + elif mqtt_errno == MQTT_ERR_UNKNOWN: + return "Unknown error." + elif mqtt_errno == MQTT_ERR_ERRNO: + return "Error defined by errno." + else: + return "Unknown error." 
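The `error_string` dispatch above, like the fixed-header constants (`CONNECT = 0x10` through `DISCONNECT = 0xE0`), is a straight code-to-meaning mapping. A minimal self-contained sketch of how those packet-type constants are used when parsing an MQTT fixed header — the constants are re-declared locally, and `packet_type` is an illustrative helper, not a function of this module:

```python
# A few of the MQTT control-packet type constants defined above,
# re-declared here so the sketch is self-contained.
CONNECT = 0x10
CONNACK = 0x20
PUBLISH = 0x30
PINGRESP = 0xD0
DISCONNECT = 0xE0

PACKET_NAMES = {
    CONNECT: "CONNECT",
    CONNACK: "CONNACK",
    PUBLISH: "PUBLISH",
    PINGRESP: "PINGRESP",
    DISCONNECT: "DISCONNECT",
}

def packet_type(fixed_header_byte):
    # The upper four bits of the first byte carry the packet type;
    # the lower four bits carry flags (DUP/QoS/RETAIN for PUBLISH,
    # zero for most other packet types).
    return PACKET_NAMES.get(fixed_header_byte & 0xF0, "UNKNOWN")

# A PUBLISH with QoS 1 and the retain flag set: 0x30 | 0x02 | 0x01
print(packet_type(0x33))  # -> PUBLISH
```

This is why the module compares `msg.command & 0xF0` style values against these constants rather than the raw first byte: the flag bits in the low nibble would otherwise break the equality test.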
+ + +def connack_string(connack_code): + """Return the string associated with a CONNACK result.""" + if connack_code == 0: + return "Connection Accepted." + elif connack_code == 1: + return "Connection Refused: unacceptable protocol version." + elif connack_code == 2: + return "Connection Refused: identifier rejected." + elif connack_code == 3: + return "Connection Refused: broker unavailable." + elif connack_code == 4: + return "Connection Refused: bad user name or password." + elif connack_code == 5: + return "Connection Refused: not authorised." + else: + return "Connection Refused: unknown reason." + + +def topic_matches_sub(sub, topic): + """Check whether a topic matches a subscription. + + For example: + + foo/bar would match the subscription foo/# or +/bar + non/matching would not match the subscription non/+/+ + """ + result = True + multilevel_wildcard = False + + slen = len(sub) + tlen = len(topic) + + if slen > 0 and tlen > 0: + if (sub[0] == '$' and topic[0] != '$') or (topic[0] == '$' and sub[0] != '$'): + return False + + spos = 0 + tpos = 0 + + while spos < slen and tpos < tlen: + if sub[spos] == topic[tpos]: + if tpos == tlen-1: + # Check for e.g. foo matching foo/# + if spos == slen-3 and sub[spos+1] == '/' and sub[spos+2] == '#': + result = True + multilevel_wildcard = True + break + + spos += 1 + tpos += 1 + + if tpos == tlen and spos == slen-1 and sub[spos] == '+': + spos += 1 + result = True + break + else: + if sub[spos] == '+': + spos += 1 + while tpos < tlen and topic[tpos] != '/': + tpos += 1 + if tpos == tlen and spos == slen: + result = True + break + + elif sub[spos] == '#': + multilevel_wildcard = True + if spos+1 != slen: + result = False + break + else: + result = True + break + + else: + result = False + break + + if not multilevel_wildcard and (tpos < tlen or spos < slen): + result = False + + return result + + +def _socketpair_compat(): + """TCP/IP socketpair including Windows support""" + listensock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_IP) + listensock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + listensock.bind(("127.0.0.1", 0)) + listensock.listen(1) + + iface, port = listensock.getsockname() + sock1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_IP) + sock1.setblocking(0) + try: + sock1.connect(("127.0.0.1", port)) + except socket.error as err: + if err.errno != errno.EINPROGRESS and err.errno != errno.EWOULDBLOCK and err.errno != EAGAIN: + raise + sock2, address = listensock.accept() + sock2.setblocking(0) + listensock.close() + return (sock1, sock2) + + +class MQTTMessage: + """ This is a class that describes an incoming message. It is passed to the + on_message callback as the message parameter. + + Members: + + topic : String. topic that the message was published on. + payload : String/bytes the message payload. + qos : Integer. The message Quality of Service 0, 1 or 2. + retain : Boolean. If true, the message is a retained message and not fresh. + mid : Integer. The message id. 
+ """ + def __init__(self): + self.timestamp = 0 + self.state = mqtt_ms_invalid + self.dup = False + self.mid = 0 + self.topic = "" + self.payload = None + self.qos = 0 + self.retain = False + + +class Client(object): + """MQTT version 3.1/3.1.1 client class. + + This is the main class for use communicating with an MQTT broker. + + General usage flow: + + * Use connect()/connect_async() to connect to a broker + * Call loop() frequently to maintain network traffic flow with the broker + * Or use loop_start() to set a thread running to call loop() for you. + * Or use loop_forever() to handle calling loop() for you in a blocking + * function. + * Use subscribe() to subscribe to a topic and receive messages + * Use publish() to send messages + * Use disconnect() to disconnect from the broker + + Data returned from the broker is made available with the use of callback + functions as described below. + + Callbacks + ========= + + A number of callback functions are available to receive data back from the + broker. To use a callback, define a function and then assign it to the + client: + + def on_connect(client, userdata, flags, rc): + print("Connection returned " + str(rc)) + + client.on_connect = on_connect + + All of the callbacks as described below have a "client" and an "userdata" + argument. "client" is the Client instance that is calling the callback. + "userdata" is user data of any type and can be set when creating a new client + instance or with user_data_set(userdata). + + The callbacks: + + on_connect(client, userdata, flags, rc): called when the broker responds to our connection + request. + flags is a dict that contains response flags from the broker: + flags['session present'] - this flag is useful for clients that are + using clean session set to 0 only. If a client with clean + session=0, that reconnects to a broker that it has previously + connected to, this flag indicates whether the broker still has the + session information for the client. 
If 1, the session still exists. + The value of rc determines success or not: + 0: Connection successful + 1: Connection refused - incorrect protocol version + 2: Connection refused - invalid client identifier + 3: Connection refused - server unavailable + 4: Connection refused - bad username or password + 5: Connection refused - not authorised + 6-255: Currently unused. + + on_disconnect(client, userdata, rc): called when the client disconnects from the broker. + The rc parameter indicates the disconnection state. If MQTT_ERR_SUCCESS + (0), the callback was called in response to a disconnect() call. If any + other value the disconnection was unexpected, such as might be caused by + a network error. + + on_message(client, userdata, message): called when a message has been received on a + topic that the client subscribes to. The message variable is a + MQTTMessage that describes all of the message parameters. + + on_publish(client, userdata, mid): called when a message that was to be sent using the + publish() call has completed transmission to the broker. For messages + with QoS levels 1 and 2, this means that the appropriate handshakes have + completed. For QoS 0, this simply means that the message has left the + client. The mid variable matches the mid variable returned from the + corresponding publish() call, to allow outgoing messages to be tracked. + This callback is important because even if the publish() call returns + success, it does not always mean that the message has been sent. + + on_subscribe(client, userdata, mid, granted_qos): called when the broker responds to a + subscribe request. The mid variable matches the mid variable returned + from the corresponding subscribe() call. The granted_qos variable is a + list of integers that give the QoS level the broker has granted for each + of the different subscription requests. + + on_unsubscribe(client, userdata, mid): called when the broker responds to an unsubscribe + request. 
The mid variable matches the mid variable returned from the + corresponding unsubscribe() call. + + on_log(client, userdata, level, buf): called when the client has log information. Define + to allow debugging. The level variable gives the severity of the message + and will be one of MQTT_LOG_INFO, MQTT_LOG_NOTICE, MQTT_LOG_WARNING, + MQTT_LOG_ERR, and MQTT_LOG_DEBUG. The message itself is in buf. + + """ + def __init__(self, client_id="", clean_session=True, userdata=None, protocol=MQTTv31, useSecuredWebsocket=False): + """client_id is the unique client id string used when connecting to the + broker. If client_id is zero length or None, then one will be randomly + generated. In this case, clean_session must be True. If this is not the + case a ValueError will be raised. + + clean_session is a boolean that determines the client type. If True, + the broker will remove all information about this client when it + disconnects. If False, the client is a persistent client and + subscription information and queued messages will be retained when the + client disconnects. + Note that a client will never discard its own outgoing messages on + disconnect. Calling connect() or reconnect() will cause the messages to + be resent. Use reinitialise() to reset a client to its original state. + + userdata is user defined data of any type that is passed as the "userdata" + parameter to callbacks. It may be updated at a later point with the + user_data_set() function. + + The protocol argument allows explicit setting of the MQTT version to + use for this client. Can be paho.mqtt.client.MQTTv311 (v3.1.1) or + paho.mqtt.client.MQTTv31 (v3.1), with the default being v3.1. If the + broker reports that the client connected with an invalid protocol + version, the client will automatically attempt to reconnect using v3.1 + instead. + + useSecuredWebsocket is a boolean that determines whether the client uses + MQTT over Websocket with sigV4 signing (True) or MQTT with plain TCP + socket. 
If True, the client will try to find AWS_ACCESS_KEY_ID and + AWS_SECRET_ACCESS_KEY in the system environment variables and start the + sigV4 signing and Websocket handshake. Under this configuration, all + outbound MQTT packets will be wrapped around with Websocket framework. All + inbound MQTT packets will be automatically wss-decoded. + """ + if not clean_session and (client_id == "" or client_id is None): + raise ValueError('A client id must be provided if clean session is False.') + + self._protocol = protocol + self._userdata = userdata + self._sock = None + self._sockpairR, self._sockpairW = _socketpair_compat() + self._keepalive = 60 + self._message_retry = 20 + self._last_retry_check = 0 + self._clean_session = clean_session + if client_id == "" or client_id is None: + self._client_id = "paho/" + "".join(random.choice("0123456789ADCDEF") for x in range(23-5)) + else: + self._client_id = client_id + + self._username = "" + self._password = "" + self._in_packet = { + "command": 0, + "have_remaining": 0, + "remaining_count": [], + "remaining_mult": 1, + "remaining_length": 0, + "packet": b"", + "to_process": 0, + "pos": 0} + self._out_packet = [] + self._current_out_packet = None + self._last_msg_in = time.time() + self._last_msg_out = time.time() + self._ping_t = 0 + self._last_mid = 0 + self._state = mqtt_cs_new + self._max_inflight_messages = 20 + self._out_messages = [] + self._in_messages = [] + self._inflight_messages = 0 + self._will = False + self._will_topic = "" + self._will_payload = None + self._will_qos = 0 + self._will_retain = False + self.on_disconnect = None + self.on_connect = None + self.on_publish = None + self.on_message = None + self.on_message_filtered = [] + self.on_subscribe = None + self.on_unsubscribe = None + self.on_log = None + self._host = "" + self._port = 1883 + self._bind_address = "" + self._socket_factory = None + self._in_callback = False + self._strict_protocol = False + self._callback_mutex = threading.Lock() + 
self._state_mutex = threading.Lock() + self._out_packet_mutex = threading.Lock() + self._current_out_packet_mutex = threading.Lock() + self._msgtime_mutex = threading.Lock() + self._out_message_mutex = threading.Lock() + self._in_message_mutex = threading.Lock() + self._thread = None + self._thread_terminate = False + self._ssl = None + self._tls_certfile = None + self._tls_keyfile = None + self._tls_ca_certs = None + self._tls_cert_reqs = None + self._tls_ciphers = None + self._tls_version = tls_version + self._tls_insecure = False + self._useSecuredWebsocket = useSecuredWebsocket # Do we enable secured websocket + self._backoffCore = ProgressiveBackOffCore() # Init the backoffCore using default configuration + self._AWSAccessKeyIDCustomConfig = "" + self._AWSSecretAccessKeyCustomConfig = "" + self._AWSSessionTokenCustomConfig = "" + self._alpn_protocols = None + + def __del__(self): + pass + + + def setBackoffTiming(self, srcBaseReconnectTimeSecond, srcMaximumReconnectTimeSecond, srcMinimumConnectTimeSecond): + """ + Make custom settings for backoff timing for reconnect logic + srcBaseReconnectTimeSecond - The base reconnection time in seconds + srcMaximumReconnectTimeSecond - The maximum reconnection time in seconds + srcMinimumConnectTimeSecond - The minimum time in seconds that a connection must be maintained in order to be considered stable + * Raise ValueError if input params are malformed + """ + self._backoffCore.configTime(srcBaseReconnectTimeSecond, srcMaximumReconnectTimeSecond, srcMinimumConnectTimeSecond) + + def configIAMCredentials(self, srcAWSAccessKeyID, srcAWSSecretAccessKey, srcAWSSessionToken): + """ + Make custom settings for IAM credentials for websocket connection + srcAWSAccessKeyID - AWS IAM access key + srcAWSSecretAccessKey - AWS IAM secret key + srcAWSSessionToken - AWS Session Token + """ + self._AWSAccessKeyIDCustomConfig = srcAWSAccessKeyID + self._AWSSecretAccessKeyCustomConfig = srcAWSSecretAccessKey + 
self._AWSSessionTokenCustomConfig = srcAWSSessionToken + + def config_alpn_protocols(self, alpn_protocols): + """ + Make custom settings for ALPN protocols + :param alpn_protocols: Array of strings that specifies the alpn protocols to be used + :return: None + """ + self._alpn_protocols = alpn_protocols + + def reinitialise(self, client_id="", clean_session=True, userdata=None): + if self._ssl: + self._ssl.close() + self._ssl = None + self._sock = None + elif self._sock: + self._sock.close() + self._sock = None + if self._sockpairR: + self._sockpairR.close() + self._sockpairR = None + if self._sockpairW: + self._sockpairW.close() + self._sockpairW = None + + self.__init__(client_id, clean_session, userdata) + + def tls_set(self, ca_certs, certfile=None, keyfile=None, cert_reqs=cert_reqs, tls_version=tls_version, ciphers=None): + """Configure network encryption and authentication options. Enables SSL/TLS support. + + ca_certs : a string path to the Certificate Authority certificate files + that are to be treated as trusted by this client. If this is the only + option given then the client will operate in a similar manner to a web + browser. That is to say it will require the broker to have a + certificate signed by the Certificate Authorities in ca_certs and will + communicate using TLS v1, but will not attempt any form of + authentication. This provides basic network encryption but may not be + sufficient depending on how the broker is configured. + + certfile and keyfile are strings pointing to the PEM encoded client + certificate and private keys respectively. If these arguments are not + None then they will be used as client information for TLS based + authentication. Support for this feature is broker dependent. Note + that if either of these files in encrypted and needs a password to + decrypt it, Python will ask for the password at the command line. It is + not currently possible to define a callback to provide the password. 
+ + cert_reqs allows the certificate requirements that the client imposes + on the broker to be changed. By default this is ssl.CERT_REQUIRED, + which means that the broker must provide a certificate. See the ssl + pydoc for more information on this parameter. + + tls_version allows the version of the SSL/TLS protocol used to be + specified. By default TLS v1 is used. Previous versions (all versions + beginning with SSL) are possible but not recommended due to possible + security problems. + + ciphers is a string specifying which encryption ciphers are allowable + for this connection, or None to use the defaults. See the ssl pydoc for + more information. + + Must be called before connect() or connect_async().""" + if HAVE_SSL is False: + raise ValueError('This platform has no SSL/TLS.') + + if sys.version < '2.7': + raise ValueError('Python 2.7 is the minimum supported version for TLS.') + + if ca_certs is None: + raise ValueError('ca_certs must not be None.') + + try: + f = open(ca_certs, "r") + except IOError as err: + raise IOError(ca_certs+": "+err.strerror) + else: + f.close() + if certfile is not None: + try: + f = open(certfile, "r") + except IOError as err: + raise IOError(certfile+": "+err.strerror) + else: + f.close() + if keyfile is not None: + try: + f = open(keyfile, "r") + except IOError as err: + raise IOError(keyfile+": "+err.strerror) + else: + f.close() + + self._tls_ca_certs = ca_certs + self._tls_certfile = certfile + self._tls_keyfile = keyfile + self._tls_cert_reqs = cert_reqs + self._tls_version = tls_version + self._tls_ciphers = ciphers + + def tls_insecure_set(self, value): + """Configure verification of the server hostname in the server certificate. + + If value is set to true, it is impossible to guarantee that the host + you are connecting to is not impersonating your server. This can be + useful in initial server testing, but makes it possible for a malicious + third party to impersonate your server through DNS spoofing, for + example. 
+ + Do not use this function in a real system. Setting value to true means + there is no point using encryption. + + Must be called before connect().""" + if HAVE_SSL is False: + raise ValueError('This platform has no SSL/TLS.') + + self._tls_insecure = value + + def connect(self, host, port=1883, keepalive=60, bind_address=""): + """Connect to a remote broker. + + host is the hostname or IP address of the remote broker. + port is the network port of the server host to connect to. Defaults to + 1883. Note that the default port for MQTT over SSL/TLS is 8883 so if you + are using tls_set() the port may need providing. + keepalive: Maximum period in seconds between communications with the + broker. If no other messages are being exchanged, this controls the + rate at which the client will send ping messages to the broker. + """ + self.connect_async(host, port, keepalive, bind_address) + return self.reconnect() + + def connect_srv(self, domain=None, keepalive=60, bind_address=""): + """Connect to a remote broker. + + domain is the DNS domain to search for SRV records; if None, + try to determine local domain name. 
+ keepalive and bind_address are as for connect() + """ + + if HAVE_DNS is False: + raise ValueError('No DNS resolver library found.') + + if domain is None: + domain = socket.getfqdn() + domain = domain[domain.find('.') + 1:] + + try: + rr = '_mqtt._tcp.%s' % domain + if self._ssl is not None: + # IANA specifies secure-mqtt (not mqtts) for port 8883 + rr = '_secure-mqtt._tcp.%s' % domain + answers = [] + for answer in dns.resolver.query(rr, dns.rdatatype.SRV): + addr = answer.target.to_text()[:-1] + answers.append((addr, answer.port, answer.priority, answer.weight)) + except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers): + raise ValueError("No answer/NXDOMAIN for SRV in %s" % (domain)) + + # FIXME: doesn't account for weight + for answer in answers: + host, port, prio, weight = answer + + try: + return self.connect(host, port, keepalive, bind_address) + except: + pass + + raise ValueError("No SRV hosts responded") + + def connect_async(self, host, port=1883, keepalive=60, bind_address=""): + """Connect to a remote broker asynchronously. This is a non-blocking + connect call that can be used with loop_start() to provide very quick + start. + + host is the hostname or IP address of the remote broker. + port is the network port of the server host to connect to. Defaults to + 1883. Note that the default port for MQTT over SSL/TLS is 8883 so if you + are using tls_set() the port may need providing. + keepalive: Maximum period in seconds between communications with the + broker. If no other messages are being exchanged, this controls the + rate at which the client will send ping messages to the broker. 
+ """ + if host is None or len(host) == 0: + raise ValueError('Invalid host.') + if port <= 0: + raise ValueError('Invalid port number.') + if keepalive < 0: + raise ValueError('Keepalive must be >=0.') + if bind_address != "" and bind_address is not None: + if (sys.version_info[0] == 2 and sys.version_info[1] < 7) or (sys.version_info[0] == 3 and sys.version_info[1] < 2): + raise ValueError('bind_address requires Python 2.7 or 3.2.') + + self._host = host + self._port = port + self._keepalive = keepalive + self._bind_address = bind_address + + self._state_mutex.acquire() + self._state = mqtt_cs_connect_async + self._state_mutex.release() + + def reconnect(self): + """Reconnect the client after a disconnect. Can only be called after + connect()/connect_async().""" + if len(self._host) == 0: + raise ValueError('Invalid host.') + if self._port <= 0: + raise ValueError('Invalid port number.') + + self._in_packet = { + "command": 0, + "have_remaining": 0, + "remaining_count": [], + "remaining_mult": 1, + "remaining_length": 0, + "packet": b"", + "to_process": 0, + "pos": 0} + + self._out_packet_mutex.acquire() + self._out_packet = [] + self._out_packet_mutex.release() + + self._current_out_packet_mutex.acquire() + self._current_out_packet = None + self._current_out_packet_mutex.release() + + self._msgtime_mutex.acquire() + self._last_msg_in = time.time() + self._last_msg_out = time.time() + self._msgtime_mutex.release() + + self._ping_t = 0 + self._state_mutex.acquire() + self._state = mqtt_cs_new + self._state_mutex.release() + if self._ssl: + self._ssl.close() + self._ssl = None + self._sock = None + elif self._sock: + self._sock.close() + self._sock = None + + # Put messages in progress in a valid state. 
+ self._messages_reconnect_reset() + + try: + if self._socket_factory: + sock = self._socket_factory() + elif (sys.version_info[0] == 2 and sys.version_info[1] < 7) or (sys.version_info[0] == 3 and sys.version_info[1] < 2): + sock = socket.create_connection((self._host, self._port)) + else: + sock = socket.create_connection((self._host, self._port), source_address=(self._bind_address, 0)) + except socket.error as err: + if err.errno != errno.EINPROGRESS and err.errno != errno.EWOULDBLOCK and err.errno != EAGAIN: + raise + + verify_hostname = self._tls_insecure is False # Decide whether we need to verify hostname + + if self._tls_ca_certs is not None: + if self._useSecuredWebsocket: + # Never assign to ._ssl before wss handshake is finished + # Non-None value for ._ssl will allow ops before wss-MQTT connection is established + rawSSL = ssl.wrap_socket(sock, ca_certs=self._tls_ca_certs, cert_reqs=ssl.CERT_REQUIRED) # Add server certificate verification + rawSSL.setblocking(0) # Non-blocking socket + self._ssl = SecuredWebSocketCore(rawSSL, self._host, self._port, self._AWSAccessKeyIDCustomConfig, self._AWSSecretAccessKeyCustomConfig, self._AWSSessionTokenCustomConfig) # Override the _ssl socket + # self._ssl.enableDebug() + elif self._alpn_protocols is not None: + # SSLContext is required to enable ALPN support + # Assuming Python 2.7.10+/3.5+ till the end of this elif branch + ssl_context = SSLContextBuilder()\ + .with_ca_certs(self._tls_ca_certs)\ + .with_cert_key_pair(self._tls_certfile, self._tls_keyfile)\ + .with_cert_reqs(self._tls_cert_reqs)\ + .with_check_hostname(True)\ + .with_ciphers(self._tls_ciphers)\ + .with_alpn_protocols(self._alpn_protocols)\ + .build() + self._ssl = ssl_context.wrap_socket(sock, server_hostname=self._host, do_handshake_on_connect=False) + verify_hostname = False # Since check_hostname in SSLContext is already set to True, no need to verify it again + self._ssl.do_handshake() + else: + self._ssl = ssl.wrap_socket( + sock, + 
certfile=self._tls_certfile,
+                keyfile=self._tls_keyfile,
+                ca_certs=self._tls_ca_certs,
+                cert_reqs=self._tls_cert_reqs,
+                ssl_version=self._tls_version,
+                ciphers=self._tls_ciphers)
+
+            if verify_hostname:
+                if sys.version_info[0] < 3 or (sys.version_info[0] == 3 and sys.version_info[1] < 5):  # No IP host match before 3.5.x
+                    self._tls_match_hostname()
+                else:
+                    ssl.match_hostname(self._ssl.getpeercert(), self._host)
+
+        self._sock = sock
+
+        if self._ssl and not self._useSecuredWebsocket:
+            self._ssl.setblocking(0)  # For X.509 cert mutual auth.
+        elif not self._ssl:
+            self._sock.setblocking(0)  # For plain socket
+        else:
+            pass  # For MQTT over WebSocket
+
+        return self._send_connect(self._keepalive, self._clean_session)
+
+    def loop(self, timeout=1.0, max_packets=1):
+        """Process network events.
+
+        This function must be called regularly to ensure communication with the
+        broker is carried out. It calls select() on the network socket to wait
+        for network events. If incoming data is present it will then be
+        processed. Outgoing commands, from e.g. publish(), are normally sent
+        as soon as their function is called, but this is not always
+        possible. loop() will also attempt to send any remaining outgoing
+        messages, which also includes commands that are part of the flow for
+        messages with QoS>0.
+
+        timeout: The time in seconds to wait for incoming/outgoing network
+        traffic before timing out and returning.
+        max_packets: Not currently used.
+
+        Returns MQTT_ERR_SUCCESS on success.
+        Returns >0 on error.
+
+        A ValueError will be raised if timeout < 0"""
+        if timeout < 0.0:
+            raise ValueError('Invalid timeout.')
+
+        self._current_out_packet_mutex.acquire()
+        self._out_packet_mutex.acquire()
+        if self._current_out_packet is None and len(self._out_packet) > 0:
+            self._current_out_packet = self._out_packet.pop(0)
+
+        if self._current_out_packet:
+            wlist = [self.socket()]
+        else:
+            wlist = []
+        self._out_packet_mutex.release()
+        self._current_out_packet_mutex.release()
+
+        # sockpairR is used to break out of select() before the timeout, on a
+        # call to publish() etc.
+        rlist = [self.socket(), self._sockpairR]
+        try:
+            socklist = select.select(rlist, wlist, [], timeout)
+        except TypeError:
+            # Socket isn't the correct type; in all likelihood the connection is lost
+            return MQTT_ERR_CONN_LOST
+        except ValueError:
+            # Can occur if we just reconnected but rlist/wlist contain a -1 for
+            # some reason.
+            return MQTT_ERR_CONN_LOST
+        except Exception:
+            return MQTT_ERR_UNKNOWN
+
+        if self.socket() in socklist[0]:
+            rc = self.loop_read(max_packets)
+            if rc or (self._ssl is None and self._sock is None):
+                return rc
+
+        if self._sockpairR in socklist[0]:
+            # Stimulate output write even though we didn't ask for it, because
+            # at that point the publish or other command wasn't present.
+            socklist[1].insert(0, self.socket())
+            # Clear sockpairR - only ever a single byte written.
+            try:
+                self._sockpairR.recv(1)
+            except socket.error as err:
+                if err.errno != EAGAIN:
+                    raise
+
+        if self.socket() in socklist[1]:
+            rc = self.loop_write(max_packets)
+            if rc or (self._ssl is None and self._sock is None):
+                return rc
+
+        return self.loop_misc()
+
+    def publish(self, topic, payload=None, qos=0, retain=False):
+        """Publish a message on a topic.
+
+        This causes a message to be sent to the broker and subsequently from
+        the broker to any clients subscribing to matching topics.
+
+        topic: The topic that the message should be published on.
+        payload: The actual message to send.
If not given, or set to None a + zero length message will be used. Passing an int or float will result + in the payload being converted to a string representing that number. If + you wish to send a true int/float, use struct.pack() to create the + payload you require. + qos: The quality of service level to use. + retain: If set to true, the message will be set as the "last known + good"/retained message for the topic. + + Returns a tuple (result, mid), where result is MQTT_ERR_SUCCESS to + indicate success or MQTT_ERR_NO_CONN if the client is not currently + connected. mid is the message ID for the publish request. The mid + value can be used to track the publish request by checking against the + mid argument in the on_publish() callback if it is defined. + + A ValueError will be raised if topic is None, has zero length or is + invalid (contains a wildcard), if qos is not one of 0, 1 or 2, or if + the length of the payload is greater than 268435455 bytes.""" + if topic is None or len(topic) == 0: + raise ValueError('Invalid topic.') + if qos<0 or qos>2: + raise ValueError('Invalid QoS level.') + if isinstance(payload, str) or isinstance(payload, bytearray): + local_payload = payload + elif sys.version_info[0] < 3 and isinstance(payload, unicode): + local_payload = payload + elif isinstance(payload, int) or isinstance(payload, float): + local_payload = str(payload) + elif payload is None: + local_payload = None + else: + raise TypeError('payload must be a string, bytearray, int, float or None.') + + if local_payload is not None and len(local_payload) > 268435455: + raise ValueError('Payload too large.') + + if self._topic_wildcard_len_check(topic) != MQTT_ERR_SUCCESS: + raise ValueError('Publish topic cannot contain wildcards.') + + local_mid = self._mid_generate() + + if qos == 0: + rc = self._send_publish(local_mid, topic, local_payload, qos, retain, False) + return (rc, local_mid) + else: + message = MQTTMessage() + message.timestamp = time.time() + + message.mid 
= local_mid
+            message.topic = topic
+            if local_payload is None or len(local_payload) == 0:
+                message.payload = None
+            else:
+                message.payload = local_payload
+
+            message.qos = qos
+            message.retain = retain
+            message.dup = False
+
+            self._out_message_mutex.acquire()
+            self._out_messages.append(message)
+            if self._max_inflight_messages == 0 or self._inflight_messages < self._max_inflight_messages:
+                self._inflight_messages = self._inflight_messages + 1
+                if qos == 1:
+                    message.state = mqtt_ms_wait_for_puback
+                elif qos == 2:
+                    message.state = mqtt_ms_wait_for_pubrec
+                self._out_message_mutex.release()
+
+                rc = self._send_publish(message.mid, message.topic, message.payload, message.qos, message.retain, message.dup)
+
+                # Remove from inflight messages so it will be sent after a connection is made
+                if rc == MQTT_ERR_NO_CONN:
+                    with self._out_message_mutex:
+                        self._inflight_messages -= 1
+                        message.state = mqtt_ms_publish
+
+                return (rc, local_mid)
+            else:
+                message.state = mqtt_ms_queued
+                self._out_message_mutex.release()
+                return (MQTT_ERR_SUCCESS, local_mid)
+
+    def username_pw_set(self, username, password=None):
+        """Set a username and optionally a password for broker authentication.
+
+        Must be called before connect() to have any effect.
+        Requires a broker that supports MQTT v3.1.
+
+        username: The username to authenticate with. It need have no relationship to the client id.
+        password: The password to authenticate with. Optional, set to None if not required.
+        """
+        self._username = username.encode('utf-8')
+        self._password = password
+
+    def socket_factory_set(self, socket_factory):
+        """Set a socket factory to custom configure a different socket type for
+        the MQTT connection.
+        Must be called before connect() to have any effect.
+
+        socket_factory: a create_connection-style function which creates a socket to the user's specification
+        """
+        self._socket_factory = socket_factory
+
+    def disconnect(self):
+        """Disconnect a connected client from the broker."""
+        self._state_mutex.acquire()
+        self._state = mqtt_cs_disconnecting
+        self._state_mutex.release()
+
+        self._backoffCore.stopStableConnectionTimer()
+
+        if self._sock is None and self._ssl is None:
+            return MQTT_ERR_NO_CONN
+
+        return self._send_disconnect()
+
+    def subscribe(self, topic, qos=0):
+        """Subscribe the client to one or more topics.
+
+        This function may be called in three different ways:
+
+        Simple string and integer
+        -------------------------
+        e.g. subscribe("my/topic", 2)
+
+        topic: A string specifying the subscription topic to subscribe to.
+        qos: The desired quality of service level for the subscription.
+             Defaults to 0.
+
+        String and integer tuple
+        ------------------------
+        e.g. subscribe(("my/topic", 1))
+
+        topic: A tuple of (topic, qos). Both topic and qos must be present in
+               the tuple.
+        qos: Not used.
+
+        List of string and integer tuples
+        ---------------------------------
+        e.g. subscribe([("my/topic", 0), ("another/topic", 2)])
+
+        This allows multiple topic subscriptions in a single SUBSCRIBE
+        command, which is more efficient than using multiple calls to
+        subscribe().
+
+        topic: A list of tuples of format (topic, qos). Both topic and qos must
+               be present in all of the tuples.
+        qos: Not used.
+
+        The function returns a tuple (result, mid), where result is
+        MQTT_ERR_SUCCESS to indicate success or (MQTT_ERR_NO_CONN, None) if the
+        client is not currently connected. mid is the message ID for the
+        subscribe request. The mid value can be used to track the subscribe
+        request by checking against the mid argument in the on_subscribe()
+        callback if it is defined.
+
+        Raises a ValueError if qos is not 0, 1 or 2, or if topic is None or has
+        zero string length, or if topic is not a string, tuple or list.
+ """ + topic_qos_list = None + if isinstance(topic, str): + if qos<0 or qos>2: + raise ValueError('Invalid QoS level.') + if topic is None or len(topic) == 0: + raise ValueError('Invalid topic.') + topic_qos_list = [(topic.encode('utf-8'), qos)] + elif isinstance(topic, tuple): + if topic[1]<0 or topic[1]>2: + raise ValueError('Invalid QoS level.') + if topic[0] is None or len(topic[0]) == 0 or not isinstance(topic[0], str): + raise ValueError('Invalid topic.') + topic_qos_list = [(topic[0].encode('utf-8'), topic[1])] + elif isinstance(topic, list): + topic_qos_list = [] + for t in topic: + if t[1]<0 or t[1]>2: + raise ValueError('Invalid QoS level.') + if t[0] is None or len(t[0]) == 0 or not isinstance(t[0], str): + raise ValueError('Invalid topic.') + topic_qos_list.append((t[0].encode('utf-8'), t[1])) + + if topic_qos_list is None: + raise ValueError("No topic specified, or incorrect topic type.") + + if self._sock is None and self._ssl is None: + return (MQTT_ERR_NO_CONN, None) + + return self._send_subscribe(False, topic_qos_list) + + def unsubscribe(self, topic): + """Unsubscribe the client from one or more topics. + + topic: A single string, or list of strings that are the subscription + topics to unsubscribe from. + + Returns a tuple (result, mid), where result is MQTT_ERR_SUCCESS + to indicate success or (MQTT_ERR_NO_CONN, None) if the client is not + currently connected. + mid is the message ID for the unsubscribe request. The mid value can be + used to track the unsubscribe request by checking against the mid + argument in the on_unsubscribe() callback if it is defined. + + Raises a ValueError if topic is None or has zero string length, or is + not a string or list. 
+        """
+        topic_list = None
+        if topic is None:
+            raise ValueError('Invalid topic.')
+        if isinstance(topic, str):
+            if len(topic) == 0:
+                raise ValueError('Invalid topic.')
+            topic_list = [topic.encode('utf-8')]
+        elif isinstance(topic, list):
+            topic_list = []
+            for t in topic:
+                if len(t) == 0 or not isinstance(t, str):
+                    raise ValueError('Invalid topic.')
+                topic_list.append(t.encode('utf-8'))
+
+        if topic_list is None:
+            raise ValueError("No topic specified, or incorrect topic type.")
+
+        if self._sock is None and self._ssl is None:
+            return (MQTT_ERR_NO_CONN, None)
+
+        return self._send_unsubscribe(False, topic_list)
+
+    def loop_read(self, max_packets=1):
+        """Process read network events. Use in place of calling loop() if you
+        wish to handle your client reads as part of your own application.
+
+        Use socket() to obtain the client socket to call select() or equivalent
+        on.
+
+        Do not use if you are using the threaded interface loop_start()."""
+        if self._sock is None and self._ssl is None:
+            return MQTT_ERR_NO_CONN
+
+        max_packets = len(self._out_messages) + len(self._in_messages)
+        if max_packets < 1:
+            max_packets = 1
+
+        for i in range(0, max_packets):
+            rc = self._packet_read()
+            if rc > 0:
+                return self._loop_rc_handle(rc)
+            elif rc == MQTT_ERR_AGAIN:
+                return MQTT_ERR_SUCCESS
+        return MQTT_ERR_SUCCESS
+
+    def loop_write(self, max_packets=1):
+        """Process write network events. Use in place of calling loop() if you
+        wish to handle your client writes as part of your own application.
+
+        Use socket() to obtain the client socket to call select() or equivalent
+        on.
+
+        Use want_write() to determine if there is data waiting to be written.
+
+        Do not use if you are using the threaded interface loop_start()."""
+
+        if self._sock is None and self._ssl is None:
+            return MQTT_ERR_NO_CONN
+
+        max_packets = len(self._out_packet) + 1
+        if max_packets < 1:
+            max_packets = 1
+
+        for i in range(0, max_packets):
+            rc = self._packet_write()
+            if rc > 0:
+                return self._loop_rc_handle(rc)
+            elif rc == MQTT_ERR_AGAIN:
+                return MQTT_ERR_SUCCESS
+        return MQTT_ERR_SUCCESS
+
+    def want_write(self):
+        """Call to determine if there is network data waiting to be written.
+        Useful if you are calling select() yourself rather than using loop().
+        """
+        if self._current_out_packet or len(self._out_packet) > 0:
+            return True
+        else:
+            return False
+
+    def loop_misc(self):
+        """Process miscellaneous network events. Use in place of calling loop()
+        if you wish to call select() or equivalent on the socket yourself.
+
+        Do not use if you are using the threaded interface loop_start()."""
+        if self._sock is None and self._ssl is None:
+            return MQTT_ERR_NO_CONN
+
+        now = time.time()
+        self._check_keepalive()
+        if self._last_retry_check + 1 < now:
+            # Only check once a second at most
+            self._message_retry_check()
+            self._last_retry_check = now
+
+        if self._ping_t > 0 and now - self._ping_t >= self._keepalive:
+            # client->ping_t != 0 means we are waiting for a pingresp.
+            # This hasn't happened in the keepalive time so we should disconnect.
+            if self._ssl:
+                self._ssl.close()
+                self._ssl = None
+            elif self._sock:
+                self._sock.close()
+                self._sock = None
+
+            self._callback_mutex.acquire()
+            if self._state == mqtt_cs_disconnecting:
+                rc = MQTT_ERR_SUCCESS
+            else:
+                rc = 1
+            if self.on_disconnect:
+                self._in_callback = True
+                self.on_disconnect(self, self._userdata, rc)
+                self._in_callback = False
+            self._callback_mutex.release()
+            return MQTT_ERR_CONN_LOST
+
+        return MQTT_ERR_SUCCESS
+
+    def max_inflight_messages_set(self, inflight):
+        """Set the maximum number of messages with QoS>0 that can be part way
+        through their network flow at once.
Defaults to 20.""" + if inflight < 0: + raise ValueError('Invalid inflight.') + self._max_inflight_messages = inflight + + def message_retry_set(self, retry): + """Set the timeout in seconds before a message with QoS>0 is retried. + 20 seconds by default.""" + if retry < 0: + raise ValueError('Invalid retry.') + + self._message_retry = retry + + def user_data_set(self, userdata): + """Set the user data variable passed to callbacks. May be any data type.""" + self._userdata = userdata + + def will_set(self, topic, payload=None, qos=0, retain=False): + """Set a Will to be sent by the broker in case the client disconnects unexpectedly. + + This must be called before connect() to have any effect. + + topic: The topic that the will message should be published on. + payload: The message to send as a will. If not given, or set to None a + zero length message will be used as the will. Passing an int or float + will result in the payload being converted to a string representing + that number. If you wish to send a true int/float, use struct.pack() to + create the payload you require. + qos: The quality of service level to use for the will. + retain: If set to true, the will message will be set as the "last known + good"/retained message for the topic. + + Raises a ValueError if qos is not 0, 1 or 2, or if topic is None or has + zero string length. 
+        """
+        if topic is None or len(topic) == 0:
+            raise ValueError('Invalid topic.')
+        if qos < 0 or qos > 2:
+            raise ValueError('Invalid QoS level.')
+        if isinstance(payload, str):
+            self._will_payload = payload.encode('utf-8')
+        elif isinstance(payload, bytearray):
+            self._will_payload = payload
+        elif isinstance(payload, int) or isinstance(payload, float):
+            self._will_payload = str(payload)
+        elif payload is None:
+            self._will_payload = None
+        else:
+            raise TypeError('payload must be a string, bytearray, int, float or None.')
+
+        self._will = True
+        self._will_topic = topic.encode('utf-8')
+        self._will_qos = qos
+        self._will_retain = retain
+
+    def will_clear(self):
+        """ Removes a will that was previously configured with will_set().
+
+        Must be called before connect() to have any effect."""
+        self._will = False
+        self._will_topic = ""
+        self._will_payload = None
+        self._will_qos = 0
+        self._will_retain = False
+
+    def socket(self):
+        """Return the socket or ssl object for this client."""
+        if self._ssl:
+            if self._useSecuredWebsocket:
+                return self._ssl.getSSLSocket()
+            else:
+                return self._ssl
+        else:
+            return self._sock
+
+    def loop_forever(self, timeout=1.0, max_packets=1, retry_first_connection=False):
+        """This function calls loop() for you in an infinite blocking loop. It
+        is useful for the case where you only want to run the MQTT client loop
+        in your program.
+
+        loop_forever() will handle reconnecting for you. If you call
+        disconnect() in a callback it will return.
+
+        timeout: The time in seconds to wait for incoming/outgoing network
+        traffic before timing out and returning.
+        max_packets: Not currently used.
+        retry_first_connection: Should the first connection attempt be retried on failure.
+ + Raises socket.error on first connection failures unless retry_first_connection=True + """ + + run = True + + while run: + if self._state == mqtt_cs_connect_async: + try: + self.reconnect() + except socket.error: + if not retry_first_connection: + raise + self._easy_log(MQTT_LOG_DEBUG, "Connection failed, retrying") + self._backoffCore.backOff() + # time.sleep(1) + else: + break + + while run: + rc = MQTT_ERR_SUCCESS + while rc == MQTT_ERR_SUCCESS: + rc = self.loop(timeout, max_packets) + # We don't need to worry about locking here, because we've + # either called loop_forever() when in single threaded mode, or + # in multi threaded mode when loop_stop() has been called and + # so no other threads can access _current_out_packet, + # _out_packet or _messages. + if (self._thread_terminate is True + and self._current_out_packet is None + and len(self._out_packet) == 0 + and len(self._out_messages) == 0): + + rc = 1 + run = False + + self._state_mutex.acquire() + if self._state == mqtt_cs_disconnecting or run is False or self._thread_terminate is True: + run = False + self._state_mutex.release() + else: + self._state_mutex.release() + self._backoffCore.backOff() + # time.sleep(1) + + self._state_mutex.acquire() + if self._state == mqtt_cs_disconnecting or run is False or self._thread_terminate is True: + run = False + self._state_mutex.release() + else: + self._state_mutex.release() + try: + self.reconnect() + except socket.error as err: + pass + + return rc + + def loop_start(self): + """This is part of the threaded client interface. Call this once to + start a new thread to process network traffic. This provides an + alternative to repeatedly calling loop() yourself. + """ + if self._thread is not None: + return MQTT_ERR_INVAL + + self._thread_terminate = False + self._thread = threading.Thread(target=self._thread_main) + self._thread.daemon = True + self._thread.start() + + def loop_stop(self, force=False): + """This is part of the threaded client interface. 
Call this once to
+        stop the network thread previously created with loop_start(). This call
+        will block until the network thread finishes.
+
+        The force parameter is currently ignored.
+        """
+        if self._thread is None:
+            return MQTT_ERR_INVAL
+
+        self._thread_terminate = True
+        self._thread.join()
+        self._thread = None
+
+    def message_callback_add(self, sub, callback):
+        """Register a message callback for a specific topic.
+        Messages that match 'sub' will be passed to 'callback'. Any
+        non-matching messages will be passed to the default on_message
+        callback.
+
+        Call multiple times with different 'sub' to define multiple topic
+        specific callbacks.
+
+        Topic specific callbacks may be removed with
+        message_callback_remove()."""
+        if callback is None or sub is None:
+            raise ValueError("sub and callback must both be defined.")
+
+        self._callback_mutex.acquire()
+        for i in range(0, len(self.on_message_filtered)):
+            if self.on_message_filtered[i][0] == sub:
+                self.on_message_filtered[i] = (sub, callback)
+                self._callback_mutex.release()
+                return
+
+        self.on_message_filtered.append((sub, callback))
+        self._callback_mutex.release()
+
+    def message_callback_remove(self, sub):
+        """Remove a message callback previously registered with
+        message_callback_add()."""
+        if sub is None:
+            raise ValueError("sub must be defined.")
+
+        self._callback_mutex.acquire()
+        for i in range(0, len(self.on_message_filtered)):
+            if self.on_message_filtered[i][0] == sub:
+                self.on_message_filtered.pop(i)
+                self._callback_mutex.release()
+                return
+        self._callback_mutex.release()
+
+    # ============================================================
+    # Private functions
+    # ============================================================
+
+    def _loop_rc_handle(self, rc):
+        if rc:
+            if self._ssl:
+                self._ssl.close()
+                self._ssl = None
+            elif self._sock:
+                self._sock.close()
+                self._sock = None
+
+            self._state_mutex.acquire()
+            if self._state == mqtt_cs_disconnecting:
+                rc = MQTT_ERR_SUCCESS
+
self._state_mutex.release()
+            self._callback_mutex.acquire()
+            if self.on_disconnect:
+                self._in_callback = True
+                self.on_disconnect(self, self._userdata, rc)
+                self._in_callback = False
+
+            self._callback_mutex.release()
+        return rc
+
+    def _packet_read(self):
+        # This gets called if select() indicates that there is network data
+        # available - ie. at least one byte. What we do depends on what data we
+        # already have.
+        # If we've not got a command, attempt to read one and save it. This should
+        # always work because it's only a single byte.
+        # Then try to read the remaining length. This may fail because it may
+        # be more than one byte - will need to save data pending next read if it
+        # does fail.
+        # Then try to read the remaining payload, where 'payload' here means the
+        # combined variable header and actual payload. This is the most likely to
+        # fail due to longer length, so save current data and current position.
+        # After all data is read, send to _packet_handle() to deal with.
+        # Finally, free the memory and reset everything to starting conditions.
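+        # An illustrative (hypothetical) walk-through of the remaining-length
+        # decoding performed below: a remaining length of 321 arrives as the
+        # two bytes 0xC1 0x02. First byte: 0xC1 & 127 = 65, multiplier 1,
+        # running total 65; its top bit is set, so another byte follows.
+        # Second byte: 0x02 & 127 = 2, multiplier 128, running total
+        # 65 + 256 = 321; its top bit is clear, so decoding stops.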
+ if self._in_packet['command'] == 0: + try: + if self._ssl: + command = self._ssl.read(1) + else: + command = self._sock.recv(1) + except socket.error as err: + if self._ssl and (err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE): + return MQTT_ERR_AGAIN + if err.errno == EAGAIN: + return MQTT_ERR_AGAIN + print(err) + return 1 + else: + if len(command) == 0: + return 1 + command = struct.unpack("!B", command) + self._in_packet['command'] = command[0] + + if self._in_packet['have_remaining'] == 0: + # Read remaining + # Algorithm for decoding taken from pseudo code at + # http://publib.boulder.ibm.com/infocenter/wmbhelp/v6r0m0/topic/com.ibm.etools.mft.doc/ac10870_.htm + while True: + try: + if self._ssl: + byte = self._ssl.read(1) + else: + byte = self._sock.recv(1) + except socket.error as err: + if self._ssl and (err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE): + return MQTT_ERR_AGAIN + if err.errno == EAGAIN: + return MQTT_ERR_AGAIN + print(err) + return 1 + else: + byte = struct.unpack("!B", byte) + byte = byte[0] + self._in_packet['remaining_count'].append(byte) + # Max 4 bytes length for remaining length as defined by protocol. + # Anything more likely means a broken/malicious client. 
+ if len(self._in_packet['remaining_count']) > 4: + return MQTT_ERR_PROTOCOL + + self._in_packet['remaining_length'] = self._in_packet['remaining_length'] + (byte & 127)*self._in_packet['remaining_mult'] + self._in_packet['remaining_mult'] = self._in_packet['remaining_mult'] * 128 + + if (byte & 128) == 0: + break + + self._in_packet['have_remaining'] = 1 + self._in_packet['to_process'] = self._in_packet['remaining_length'] + + while self._in_packet['to_process'] > 0: + try: + if self._ssl: + data = self._ssl.read(self._in_packet['to_process']) + else: + data = self._sock.recv(self._in_packet['to_process']) + except socket.error as err: + if self._ssl and (err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE): + return MQTT_ERR_AGAIN + if err.errno == EAGAIN: + return MQTT_ERR_AGAIN + print(err) + return 1 + else: + self._in_packet['to_process'] = self._in_packet['to_process'] - len(data) + self._in_packet['packet'] = self._in_packet['packet'] + data + + # All data for this packet is read. 
+ self._in_packet['pos'] = 0 + rc = self._packet_handle() + + # Free data and reset values + self._in_packet = dict( + command=0, + have_remaining=0, + remaining_count=[], + remaining_mult=1, + remaining_length=0, + packet=b"", + to_process=0, + pos=0) + + self._msgtime_mutex.acquire() + self._last_msg_in = time.time() + self._msgtime_mutex.release() + return rc + + def _packet_write(self): + self._current_out_packet_mutex.acquire() + while self._current_out_packet: + packet = self._current_out_packet + + try: + if self._ssl: + write_length = self._ssl.write(packet['packet'][packet['pos']:]) + else: + write_length = self._sock.send(packet['packet'][packet['pos']:]) + except AttributeError: + self._current_out_packet_mutex.release() + return MQTT_ERR_SUCCESS + except socket.error as err: + self._current_out_packet_mutex.release() + if self._ssl and (err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE): + return MQTT_ERR_AGAIN + if err.errno == EAGAIN: + return MQTT_ERR_AGAIN + print(err) + return 1 + + if write_length > 0: + packet['to_process'] = packet['to_process'] - write_length + packet['pos'] = packet['pos'] + write_length + + if packet['to_process'] == 0: + if (packet['command'] & 0xF0) == PUBLISH and packet['qos'] == 0: + self._callback_mutex.acquire() + if self.on_publish: + self._in_callback = True + self.on_publish(self, self._userdata, packet['mid']) + self._in_callback = False + + self._callback_mutex.release() + + if (packet['command'] & 0xF0) == DISCONNECT: + self._current_out_packet_mutex.release() + + self._msgtime_mutex.acquire() + self._last_msg_out = time.time() + self._msgtime_mutex.release() + + self._callback_mutex.acquire() + if self.on_disconnect: + self._in_callback = True + self.on_disconnect(self, self._userdata, 0) + self._in_callback = False + self._callback_mutex.release() + + if self._ssl: + self._ssl.close() + self._ssl = None + if self._sock: + self._sock.close() + self._sock = None + return 
MQTT_ERR_SUCCESS + + self._out_packet_mutex.acquire() + if len(self._out_packet) > 0: + self._current_out_packet = self._out_packet.pop(0) + else: + self._current_out_packet = None + self._out_packet_mutex.release() + else: + pass # FIXME + + self._current_out_packet_mutex.release() + + self._msgtime_mutex.acquire() + self._last_msg_out = time.time() + self._msgtime_mutex.release() + return MQTT_ERR_SUCCESS + + def _easy_log(self, level, buf): + if self.on_log: + self.on_log(self, self._userdata, level, buf) + + def _check_keepalive(self): + now = time.time() + self._msgtime_mutex.acquire() + last_msg_out = self._last_msg_out + last_msg_in = self._last_msg_in + self._msgtime_mutex.release() + if (self._sock is not None or self._ssl is not None) and (now - last_msg_out >= self._keepalive or now - last_msg_in >= self._keepalive): + if self._state == mqtt_cs_connected and self._ping_t == 0: + self._send_pingreq() + self._msgtime_mutex.acquire() + self._last_msg_out = now + self._last_msg_in = now + self._msgtime_mutex.release() + else: + if self._ssl: + self._ssl.close() + self._ssl = None + elif self._sock: + self._sock.close() + self._sock = None + + if self._state == mqtt_cs_disconnecting: + rc = MQTT_ERR_SUCCESS + else: + rc = 1 + self._callback_mutex.acquire() + if self.on_disconnect: + self._in_callback = True + self.on_disconnect(self, self._userdata, rc) + self._in_callback = False + self._callback_mutex.release() + + def _mid_generate(self): + self._last_mid = self._last_mid + 1 + if self._last_mid == 65536: + self._last_mid = 1 + return self._last_mid + + def _topic_wildcard_len_check(self, topic): + # Search for + or # in a topic. Return MQTT_ERR_INVAL if found. + # Also returns MQTT_ERR_INVAL if the topic string is too long. + # Returns MQTT_ERR_SUCCESS if everything is fine. 
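+        # For example: 'sensors/temp' passes this check, while 'sensors/+' and
+        # 'sensors/#' are rejected, since '+' and '#' are subscription
+        # wildcards and may not appear in a publish topic.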
+ if '+' in topic or '#' in topic or len(topic) == 0 or len(topic) > 65535: + return MQTT_ERR_INVAL + else: + return MQTT_ERR_SUCCESS + + def _send_pingreq(self): + self._easy_log(MQTT_LOG_DEBUG, "Sending PINGREQ") + rc = self._send_simple_command(PINGREQ) + if rc == MQTT_ERR_SUCCESS: + self._ping_t = time.time() + return rc + + def _send_pingresp(self): + self._easy_log(MQTT_LOG_DEBUG, "Sending PINGRESP") + return self._send_simple_command(PINGRESP) + + def _send_puback(self, mid): + self._easy_log(MQTT_LOG_DEBUG, "Sending PUBACK (Mid: "+str(mid)+")") + return self._send_command_with_mid(PUBACK, mid, False) + + def _send_pubcomp(self, mid): + self._easy_log(MQTT_LOG_DEBUG, "Sending PUBCOMP (Mid: "+str(mid)+")") + return self._send_command_with_mid(PUBCOMP, mid, False) + + def _pack_remaining_length(self, packet, remaining_length): + remaining_bytes = [] + while True: + byte = remaining_length % 128 + remaining_length = remaining_length // 128 + # If there are more digits to encode, set the top bit of this digit + if remaining_length > 0: + byte = byte | 0x80 + + remaining_bytes.append(byte) + packet.extend(struct.pack("!B", byte)) + if remaining_length == 0: + # FIXME - this doesn't deal with incorrectly large payloads + return packet + + def _pack_str16(self, packet, data): + if sys.version_info[0] < 3: + if isinstance(data, bytearray): + packet.extend(struct.pack("!H", len(data))) + packet.extend(data) + elif isinstance(data, str): + udata = data.encode('utf-8') + pack_format = "!H" + str(len(udata)) + "s" + packet.extend(struct.pack(pack_format, len(udata), udata)) + elif isinstance(data, unicode): + udata = data.encode('utf-8') + pack_format = "!H" + str(len(udata)) + "s" + packet.extend(struct.pack(pack_format, len(udata), udata)) + else: + raise TypeError + else: + if isinstance(data, bytearray) or isinstance(data, bytes): + packet.extend(struct.pack("!H", len(data))) + packet.extend(data) + elif isinstance(data, str): + udata = data.encode('utf-8') + 
pack_format = "!H" + str(len(udata)) + "s" + packet.extend(struct.pack(pack_format, len(udata), udata)) + else: + raise TypeError + + def _send_publish(self, mid, topic, payload=None, qos=0, retain=False, dup=False): + if self._sock is None and self._ssl is None: + return MQTT_ERR_NO_CONN + + utopic = topic.encode('utf-8') + command = PUBLISH | ((dup&0x1)<<3) | (qos<<1) | retain + packet = bytearray() + packet.extend(struct.pack("!B", command)) + if payload is None: + remaining_length = 2+len(utopic) + self._easy_log(MQTT_LOG_DEBUG, "Sending PUBLISH (d"+str(dup)+", q"+str(qos)+", r"+str(int(retain))+", m"+str(mid)+", '"+topic+"' (NULL payload)") + else: + if isinstance(payload, str): + upayload = payload.encode('utf-8') + payloadlen = len(upayload) + elif isinstance(payload, bytearray): + payloadlen = len(payload) + elif isinstance(payload, unicode): + upayload = payload.encode('utf-8') + payloadlen = len(upayload) + + remaining_length = 2+len(utopic) + payloadlen + self._easy_log(MQTT_LOG_DEBUG, "Sending PUBLISH (d"+str(dup)+", q"+str(qos)+", r"+str(int(retain))+", m"+str(mid)+", '"+topic+"', ... 
("+str(payloadlen)+" bytes)") + + if qos > 0: + # For message id + remaining_length = remaining_length + 2 + + self._pack_remaining_length(packet, remaining_length) + self._pack_str16(packet, topic) + + if qos > 0: + # For message id + packet.extend(struct.pack("!H", mid)) + + if payload is not None: + if isinstance(payload, str): + pack_format = str(payloadlen) + "s" + packet.extend(struct.pack(pack_format, upayload)) + elif isinstance(payload, bytearray): + packet.extend(payload) + elif isinstance(payload, unicode): + pack_format = str(payloadlen) + "s" + packet.extend(struct.pack(pack_format, upayload)) + else: + raise TypeError('payload must be a string, unicode or a bytearray.') + + return self._packet_queue(PUBLISH, packet, mid, qos) + + def _send_pubrec(self, mid): + self._easy_log(MQTT_LOG_DEBUG, "Sending PUBREC (Mid: "+str(mid)+")") + return self._send_command_with_mid(PUBREC, mid, False) + + def _send_pubrel(self, mid, dup=False): + self._easy_log(MQTT_LOG_DEBUG, "Sending PUBREL (Mid: "+str(mid)+")") + return self._send_command_with_mid(PUBREL|2, mid, dup) + + def _send_command_with_mid(self, command, mid, dup): + # For PUBACK, PUBCOMP, PUBREC, and PUBREL + if dup: + command = command | 8 + + remaining_length = 2 + packet = struct.pack('!BBH', command, remaining_length, mid) + return self._packet_queue(command, packet, mid, 1) + + def _send_simple_command(self, command): + # For DISCONNECT, PINGREQ and PINGRESP + remaining_length = 0 + packet = struct.pack('!BB', command, remaining_length) + return self._packet_queue(command, packet, 0, 0) + + def _send_connect(self, keepalive, clean_session): + if self._protocol == MQTTv31: + protocol = PROTOCOL_NAMEv31 + proto_ver = 3 + else: + protocol = PROTOCOL_NAMEv311 + proto_ver = 4 + remaining_length = 2+len(protocol) + 1+1+2 + 2+len(self._client_id) + connect_flags = 0 + if clean_session: + connect_flags = connect_flags | 0x02 + + if self._will: + if self._will_payload is not None: + remaining_length = 
remaining_length + 2+len(self._will_topic) + 2+len(self._will_payload) + else: + remaining_length = remaining_length + 2+len(self._will_topic) + 2 + + connect_flags = connect_flags | 0x04 | ((self._will_qos&0x03) << 3) | ((self._will_retain&0x01) << 5) + + if self._username: + remaining_length = remaining_length + 2+len(self._username) + connect_flags = connect_flags | 0x80 + if self._password: + connect_flags = connect_flags | 0x40 + remaining_length = remaining_length + 2+len(self._password) + + command = CONNECT + packet = bytearray() + packet.extend(struct.pack("!B", command)) + + self._pack_remaining_length(packet, remaining_length) + packet.extend(struct.pack("!H"+str(len(protocol))+"sBBH", len(protocol), protocol, proto_ver, connect_flags, keepalive)) + + self._pack_str16(packet, self._client_id) + + if self._will: + self._pack_str16(packet, self._will_topic) + if self._will_payload is None or len(self._will_payload) == 0: + packet.extend(struct.pack("!H", 0)) + else: + self._pack_str16(packet, self._will_payload) + + if self._username: + self._pack_str16(packet, self._username) + + if self._password: + self._pack_str16(packet, self._password) + + self._keepalive = keepalive + return self._packet_queue(command, packet, 0, 0) + + def _send_disconnect(self): + return self._send_simple_command(DISCONNECT) + + def _send_subscribe(self, dup, topics): + remaining_length = 2 + for t in topics: + remaining_length = remaining_length + 2+len(t[0])+1 + + command = SUBSCRIBE | (dup<<3) | (1<<1) + packet = bytearray() + packet.extend(struct.pack("!B", command)) + self._pack_remaining_length(packet, remaining_length) + local_mid = self._mid_generate() + packet.extend(struct.pack("!H", local_mid)) + for t in topics: + self._pack_str16(packet, t[0]) + packet.extend(struct.pack("B", t[1])) + return (self._packet_queue(command, packet, local_mid, 1), local_mid) + + def _send_unsubscribe(self, dup, topics): + remaining_length = 2 + for t in topics: + remaining_length = 
remaining_length + 2+len(t) + + command = UNSUBSCRIBE | (dup<<3) | (1<<1) + packet = bytearray() + packet.extend(struct.pack("!B", command)) + self._pack_remaining_length(packet, remaining_length) + local_mid = self._mid_generate() + packet.extend(struct.pack("!H", local_mid)) + for t in topics: + self._pack_str16(packet, t) + return (self._packet_queue(command, packet, local_mid, 1), local_mid) + + def _message_retry_check_actual(self, messages, mutex): + mutex.acquire() + now = time.time() + for m in messages: + if m.timestamp + self._message_retry < now: + if m.state == mqtt_ms_wait_for_puback or m.state == mqtt_ms_wait_for_pubrec: + m.timestamp = now + m.dup = True + self._send_publish(m.mid, m.topic, m.payload, m.qos, m.retain, m.dup) + elif m.state == mqtt_ms_wait_for_pubrel: + m.timestamp = now + m.dup = True + self._send_pubrec(m.mid) + elif m.state == mqtt_ms_wait_for_pubcomp: + m.timestamp = now + m.dup = True + self._send_pubrel(m.mid, True) + mutex.release() + + def _message_retry_check(self): + self._message_retry_check_actual(self._out_messages, self._out_message_mutex) + self._message_retry_check_actual(self._in_messages, self._in_message_mutex) + + def _messages_reconnect_reset_out(self): + self._out_message_mutex.acquire() + self._inflight_messages = 0 + for m in self._out_messages: + m.timestamp = 0 + if self._max_inflight_messages == 0 or self._inflight_messages < self._max_inflight_messages: + if m.qos == 0: + m.state = mqtt_ms_publish + elif m.qos == 1: + #self._inflight_messages = self._inflight_messages + 1 + if m.state == mqtt_ms_wait_for_puback: + m.dup = True + m.state = mqtt_ms_publish + elif m.qos == 2: + #self._inflight_messages = self._inflight_messages + 1 + if m.state == mqtt_ms_wait_for_pubcomp: + m.state = mqtt_ms_resend_pubrel + m.dup = True + else: + if m.state == mqtt_ms_wait_for_pubrec: + m.dup = True + m.state = mqtt_ms_publish + else: + m.state = mqtt_ms_queued + self._out_message_mutex.release() + + def 
_messages_reconnect_reset_in(self): + self._in_message_mutex.acquire() + for m in self._in_messages: + m.timestamp = 0 + if m.qos != 2: + self._in_messages.pop(self._in_messages.index(m)) + else: + # Preserve current state + pass + self._in_message_mutex.release() + + def _messages_reconnect_reset(self): + self._messages_reconnect_reset_out() + self._messages_reconnect_reset_in() + + def _packet_queue(self, command, packet, mid, qos): + mpkt = dict( + command = command, + mid = mid, + qos = qos, + pos = 0, + to_process = len(packet), + packet = packet) + + self._out_packet_mutex.acquire() + self._out_packet.append(mpkt) + if self._current_out_packet_mutex.acquire(False): + if self._current_out_packet is None and len(self._out_packet) > 0: + self._current_out_packet = self._out_packet.pop(0) + self._current_out_packet_mutex.release() + self._out_packet_mutex.release() + + # Write a single byte to sockpairW (connected to sockpairR) to break + # out of select() if in threaded mode. + try: + self._sockpairW.send(sockpair_data) + except socket.error as err: + if err.errno != EAGAIN: + raise + + if not self._in_callback and self._thread is None: + return self.loop_write() + else: + return MQTT_ERR_SUCCESS + + def _packet_handle(self): + cmd = self._in_packet['command']&0xF0 + if cmd == PINGREQ: + return self._handle_pingreq() + elif cmd == PINGRESP: + return self._handle_pingresp() + elif cmd == PUBACK: + return self._handle_pubackcomp("PUBACK") + elif cmd == PUBCOMP: + return self._handle_pubackcomp("PUBCOMP") + elif cmd == PUBLISH: + return self._handle_publish() + elif cmd == PUBREC: + return self._handle_pubrec() + elif cmd == PUBREL: + return self._handle_pubrel() + elif cmd == CONNACK: + return self._handle_connack() + elif cmd == SUBACK: + return self._handle_suback() + elif cmd == UNSUBACK: + return self._handle_unsuback() + else: + # If we don't recognise the command, return an error straight away. 
+ self._easy_log(MQTT_LOG_ERR, "Error: Unrecognised command "+str(cmd)) + return MQTT_ERR_PROTOCOL + + def _handle_pingreq(self): + if self._strict_protocol: + if self._in_packet['remaining_length'] != 0: + return MQTT_ERR_PROTOCOL + + self._easy_log(MQTT_LOG_DEBUG, "Received PINGREQ") + return self._send_pingresp() + + def _handle_pingresp(self): + if self._strict_protocol: + if self._in_packet['remaining_length'] != 0: + return MQTT_ERR_PROTOCOL + + # No longer waiting for a PINGRESP. + self._ping_t = 0 + self._easy_log(MQTT_LOG_DEBUG, "Received PINGRESP") + return MQTT_ERR_SUCCESS + + def _handle_connack(self): + if self._strict_protocol: + if self._in_packet['remaining_length'] != 2: + return MQTT_ERR_PROTOCOL + + if len(self._in_packet['packet']) != 2: + return MQTT_ERR_PROTOCOL + + (flags, result) = struct.unpack("!BB", self._in_packet['packet']) + if result == CONNACK_REFUSED_PROTOCOL_VERSION and self._protocol == MQTTv311: + self._easy_log(MQTT_LOG_DEBUG, "Received CONNACK ("+str(flags)+", "+str(result)+"), attempting downgrade to MQTT v3.1.") + # Downgrade to MQTT v3.1 + self._protocol = MQTTv31 + return self.reconnect() + + if result == 0: + self._state = mqtt_cs_connected + + self._easy_log(MQTT_LOG_DEBUG, "Received CONNACK ("+str(flags)+", "+str(result)+")") + self._callback_mutex.acquire() + if self.on_connect: + self._in_callback = True + + if sys.version_info[0] < 3: + argcount = self.on_connect.func_code.co_argcount + else: + argcount = self.on_connect.__code__.co_argcount + + if argcount == 3: + self.on_connect(self, self._userdata, result) + else: + flags_dict = dict() + flags_dict['session present'] = flags & 0x01 + self.on_connect(self, self._userdata, flags_dict, result) + self._in_callback = False + self._callback_mutex.release() + + # Start counting for stable connection + self._backoffCore.startStableConnectionTimer() + + if result == 0: + rc = 0 + self._out_message_mutex.acquire() + for m in self._out_messages: + m.timestamp = time.time() + 
if m.state == mqtt_ms_queued: + self.loop_write() # Process outgoing messages that have just been queued up + self._out_message_mutex.release() + return MQTT_ERR_SUCCESS + + if m.qos == 0: + self._in_callback = True # Don't call loop_write after _send_publish() + rc = self._send_publish(m.mid, m.topic, m.payload, m.qos, m.retain, m.dup) + self._in_callback = False + if rc != 0: + self._out_message_mutex.release() + return rc + elif m.qos == 1: + if m.state == mqtt_ms_publish: + self._inflight_messages = self._inflight_messages + 1 + m.state = mqtt_ms_wait_for_puback + self._in_callback = True # Don't call loop_write after _send_publish() + rc = self._send_publish(m.mid, m.topic, m.payload, m.qos, m.retain, m.dup) + self._in_callback = False + if rc != 0: + self._out_message_mutex.release() + return rc + elif m.qos == 2: + if m.state == mqtt_ms_publish: + self._inflight_messages = self._inflight_messages + 1 + m.state = mqtt_ms_wait_for_pubrec + self._in_callback = True # Don't call loop_write after _send_publish() + rc = self._send_publish(m.mid, m.topic, m.payload, m.qos, m.retain, m.dup) + self._in_callback = False + if rc != 0: + self._out_message_mutex.release() + return rc + elif m.state == mqtt_ms_resend_pubrel: + self._inflight_messages = self._inflight_messages + 1 + m.state = mqtt_ms_wait_for_pubcomp + self._in_callback = True # Don't call loop_write after _send_pubrel() + rc = self._send_pubrel(m.mid, m.dup) + self._in_callback = False + if rc != 0: + self._out_message_mutex.release() + return rc + self.loop_write() # Process outgoing messages that have just been queued up + self._out_message_mutex.release() + return rc + elif result > 0 and result < 6: + return MQTT_ERR_CONN_REFUSED + else: + return MQTT_ERR_PROTOCOL + + def _handle_suback(self): + self._easy_log(MQTT_LOG_DEBUG, "Received SUBACK") + pack_format = "!H" + str(len(self._in_packet['packet'])-2) + 's' + (mid, packet) = struct.unpack(pack_format, self._in_packet['packet']) + pack_format = "!" 
+ "B"*len(packet) + granted_qos = struct.unpack(pack_format, packet) + + self._callback_mutex.acquire() + if self.on_subscribe: + self._in_callback = True + self.on_subscribe(self, self._userdata, mid, granted_qos) + self._in_callback = False + self._callback_mutex.release() + + return MQTT_ERR_SUCCESS + + def _handle_publish(self): + rc = 0 + + header = self._in_packet['command'] + message = MQTTMessage() + message.dup = (header & 0x08)>>3 + message.qos = (header & 0x06)>>1 + message.retain = (header & 0x01) + + pack_format = "!H" + str(len(self._in_packet['packet'])-2) + 's' + (slen, packet) = struct.unpack(pack_format, self._in_packet['packet']) + pack_format = '!' + str(slen) + 's' + str(len(packet)-slen) + 's' + (message.topic, packet) = struct.unpack(pack_format, packet) + + if len(message.topic) == 0: + return MQTT_ERR_PROTOCOL + + if sys.version_info[0] >= 3: + message.topic = message.topic.decode('utf-8') + + if message.qos > 0: + pack_format = "!H" + str(len(packet)-2) + 's' + (message.mid, packet) = struct.unpack(pack_format, packet) + + message.payload = packet + + self._easy_log( + MQTT_LOG_DEBUG, + "Received PUBLISH (d"+str(message.dup)+ + ", q"+str(message.qos)+", r"+str(message.retain)+ + ", m"+str(message.mid)+", '"+message.topic+ + "', ... 
("+str(len(message.payload))+" bytes)") + + message.timestamp = time.time() + if message.qos == 0: + self._handle_on_message(message) + return MQTT_ERR_SUCCESS + elif message.qos == 1: + rc = self._send_puback(message.mid) + self._handle_on_message(message) + return rc + elif message.qos == 2: + rc = self._send_pubrec(message.mid) + message.state = mqtt_ms_wait_for_pubrel + self._in_message_mutex.acquire() + self._in_messages.append(message) + self._in_message_mutex.release() + return rc + else: + return MQTT_ERR_PROTOCOL + + def _handle_pubrel(self): + if self._strict_protocol: + if self._in_packet['remaining_length'] != 2: + return MQTT_ERR_PROTOCOL + + if len(self._in_packet['packet']) != 2: + return MQTT_ERR_PROTOCOL + + mid = struct.unpack("!H", self._in_packet['packet']) + mid = mid[0] + self._easy_log(MQTT_LOG_DEBUG, "Received PUBREL (Mid: "+str(mid)+")") + + self._in_message_mutex.acquire() + for i in range(len(self._in_messages)): + if self._in_messages[i].mid == mid: + + # Only pass the message on if we have removed it from the queue - this + # prevents multiple callbacks for the same message. 
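`_handle_publish` above recovers the DUP, QoS and RETAIN flags from the PUBLISH fixed-header byte with bit masks. A minimal standalone sketch of the same masks (the function name is illustrative):

```python
def parse_publish_flags(header_byte):
    """Decode an MQTT PUBLISH fixed-header byte, using the same masks as
    _handle_publish: bit 3 = DUP, bits 2-1 = QoS, bit 0 = RETAIN."""
    dup = (header_byte & 0x08) >> 3
    qos = (header_byte & 0x06) >> 1
    retain = header_byte & 0x01
    return dup, qos, retain


# 0x3B = PUBLISH (0x30) with DUP, QoS 1 and RETAIN all set
print(parse_publish_flags(0x3B))  # (1, 1, 1)
```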
+ self._handle_on_message(self._in_messages[i]) + self._in_messages.pop(i) + self._inflight_messages = self._inflight_messages - 1 + if self._max_inflight_messages > 0: + self._out_message_mutex.acquire() + rc = self._update_inflight() + self._out_message_mutex.release() + if rc != MQTT_ERR_SUCCESS: + self._in_message_mutex.release() + return rc + + self._in_message_mutex.release() + return self._send_pubcomp(mid) + + self._in_message_mutex.release() + return MQTT_ERR_SUCCESS + + def _update_inflight(self): + # Dont lock message_mutex here + for m in self._out_messages: + if self._inflight_messages < self._max_inflight_messages: + if m.qos > 0 and m.state == mqtt_ms_queued: + self._inflight_messages = self._inflight_messages + 1 + if m.qos == 1: + m.state = mqtt_ms_wait_for_puback + elif m.qos == 2: + m.state = mqtt_ms_wait_for_pubrec + rc = self._send_publish(m.mid, m.topic, m.payload, m.qos, m.retain, m.dup) + if rc != 0: + return rc + else: + return MQTT_ERR_SUCCESS + return MQTT_ERR_SUCCESS + + def _handle_pubrec(self): + if self._strict_protocol: + if self._in_packet['remaining_length'] != 2: + return MQTT_ERR_PROTOCOL + + mid = struct.unpack("!H", self._in_packet['packet']) + mid = mid[0] + self._easy_log(MQTT_LOG_DEBUG, "Received PUBREC (Mid: "+str(mid)+")") + + self._out_message_mutex.acquire() + for m in self._out_messages: + if m.mid == mid: + m.state = mqtt_ms_wait_for_pubcomp + m.timestamp = time.time() + self._out_message_mutex.release() + return self._send_pubrel(mid, False) + + self._out_message_mutex.release() + return MQTT_ERR_SUCCESS + + def _handle_unsuback(self): + if self._strict_protocol: + if self._in_packet['remaining_length'] != 2: + return MQTT_ERR_PROTOCOL + + mid = struct.unpack("!H", self._in_packet['packet']) + mid = mid[0] + self._easy_log(MQTT_LOG_DEBUG, "Received UNSUBACK (Mid: "+str(mid)+")") + self._callback_mutex.acquire() + if self.on_unsubscribe: + self._in_callback = True + self.on_unsubscribe(self, self._userdata, mid) + 
self._in_callback = False + self._callback_mutex.release() + return MQTT_ERR_SUCCESS + + def _handle_pubackcomp(self, cmd): + if self._strict_protocol: + if self._in_packet['remaining_length'] != 2: + return MQTT_ERR_PROTOCOL + + mid = struct.unpack("!H", self._in_packet['packet']) + mid = mid[0] + self._easy_log(MQTT_LOG_DEBUG, "Received "+cmd+" (Mid: "+str(mid)+")") + + self._out_message_mutex.acquire() + for i in range(len(self._out_messages)): + try: + if self._out_messages[i].mid == mid: + # Only inform the client the message has been sent once. + self._callback_mutex.acquire() + if self.on_publish: + self._out_message_mutex.release() + self._in_callback = True + self.on_publish(self, self._userdata, mid) + self._in_callback = False + self._out_message_mutex.acquire() + + self._callback_mutex.release() + self._out_messages.pop(i) + self._inflight_messages = self._inflight_messages - 1 + if self._max_inflight_messages > 0: + rc = self._update_inflight() + if rc != MQTT_ERR_SUCCESS: + self._out_message_mutex.release() + return rc + self._out_message_mutex.release() + return MQTT_ERR_SUCCESS + except IndexError: + # Have removed item so i>count. + # Not really an error. 
+ pass + + self._out_message_mutex.release() + return MQTT_ERR_SUCCESS + + def _handle_on_message(self, message): + self._callback_mutex.acquire() + matched = False + for t in self.on_message_filtered: + if topic_matches_sub(t[0], message.topic): + self._in_callback = True + t[1](self, self._userdata, message) + self._in_callback = False + matched = True + + if matched == False and self.on_message: + self._in_callback = True + self.on_message(self, self._userdata, message) + self._in_callback = False + + self._callback_mutex.release() + + def _thread_main(self): + self._state_mutex.acquire() + if self._state == mqtt_cs_connect_async: + self._state_mutex.release() + self.reconnect() + else: + self._state_mutex.release() + + self.loop_forever() + + def _host_matches_cert(self, host, cert_host): + if cert_host[0:2] == "*.": + if cert_host.count("*") != 1: + return False + + host_match = host.split(".", 1)[1] + cert_match = cert_host.split(".", 1)[1] + if host_match == cert_match: + return True + else: + return False + else: + if host == cert_host: + return True + else: + return False + + def _tls_match_hostname(self): + try: + cert = self._ssl.getpeercert() + except AttributeError: + # the getpeercert can throw Attribute error: object has no attribute 'peer_certificate' + # Don't let that crash the whole client. See also: http://bugs.python.org/issue13721 + raise ssl.SSLError('Not connected') + + san = cert.get('subjectAltName') + if san: + have_san_dns = False + for (key, value) in san: + if key == 'DNS': + have_san_dns = True + if self._host_matches_cert(self._host.lower(), value.lower()) == True: + return + if key == 'IP Address': + have_san_dns = True + if value.lower().strip() == self._host.lower().strip(): + return + + if have_san_dns: + # Only check subject if subjectAltName dns not found. 
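The `_host_matches_cert` helper above implements single-level wildcard matching for TLS certificate names. A standalone sketch of the same rule (like the original, it assumes both names contain at least one dot when a wildcard is present):

```python
def host_matches_cert(host, cert_host):
    """Single-level wildcard match, as in _host_matches_cert:
    "*.example.com" matches "a.example.com" but not "a.b.example.com"
    and not the bare "example.com"."""
    if cert_host.startswith("*."):
        if cert_host.count("*") != 1:
            # Reject names with multiple wildcards outright.
            return False
        # Compare everything after the first label on each side.
        return host.split(".", 1)[1] == cert_host.split(".", 1)[1]
    return host == cert_host
```

Matching only one label is deliberate: a certificate for `*.example.com` should not be accepted for `a.b.example.com`, which is the behaviour mainstream TLS hostname checks also enforce.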
+ raise ssl.SSLError('Certificate subject does not match remote hostname.') + subject = cert.get('subject') + if subject: + for ((key, value),) in subject: + if key == 'commonName': + if self._host_matches_cert(self._host.lower(), value.lower()) == True: + return + + raise ssl.SSLError('Certificate subject does not match remote hostname.') + + +# Compatibility class for easy porting from mosquitto.py. +class Mosquitto(Client): + def __init__(self, client_id="", clean_session=True, userdata=None): + super(Mosquitto, self).__init__(client_id, clean_session, userdata) diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/shadow/__init__.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/shadow/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/shadow/deviceShadow.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/shadow/deviceShadow.py new file mode 100644 index 0000000..f58240a --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/shadow/deviceShadow.py @@ -0,0 +1,430 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
+# */ + +import json +import logging +import uuid +from threading import Timer, Lock, Thread + + +class _shadowRequestToken: + + URN_PREFIX_LENGTH = 9 + + def getNextToken(self): + return uuid.uuid4().urn[self.URN_PREFIX_LENGTH:] # We only need the uuid digits, not the urn prefix + + +class _basicJSONParser: + + def setString(self, srcString): + self._rawString = srcString + self._dictionObject = None + + def regenerateString(self): + return json.dumps(self._dictionaryObject) + + def getAttributeValue(self, srcAttributeKey): + return self._dictionaryObject.get(srcAttributeKey) + + def setAttributeValue(self, srcAttributeKey, srcAttributeValue): + self._dictionaryObject[srcAttributeKey] = srcAttributeValue + + def validateJSON(self): + try: + self._dictionaryObject = json.loads(self._rawString) + except ValueError: + return False + return True + + +class deviceShadow: + _logger = logging.getLogger(__name__) + + def __init__(self, srcShadowName, srcIsPersistentSubscribe, srcShadowManager): + """ + + The class that denotes a local/client-side device shadow instance. + + Users can perform shadow operations on this instance to retrieve and modify the + corresponding shadow JSON document in AWS IoT Cloud. The following shadow operations + are available: + + - Get + + - Update + + - Delete + + - Listen on delta + + - Cancel listening on delta + + This is returned from :code:`AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTShadowClient.createShadowWithName` function call. + No need to call directly from user scripts. 
+ + """ + if srcShadowName is None or srcIsPersistentSubscribe is None or srcShadowManager is None: + raise TypeError("None type inputs detected.") + self._shadowName = srcShadowName + # Tool handler + self._shadowManagerHandler = srcShadowManager + self._basicJSONParserHandler = _basicJSONParser() + self._tokenHandler = _shadowRequestToken() + # Properties + self._isPersistentSubscribe = srcIsPersistentSubscribe + self._lastVersionInSync = -1 # -1 means not initialized + self._isGetSubscribed = False + self._isUpdateSubscribed = False + self._isDeleteSubscribed = False + self._shadowSubscribeCallbackTable = dict() + self._shadowSubscribeCallbackTable["get"] = None + self._shadowSubscribeCallbackTable["delete"] = None + self._shadowSubscribeCallbackTable["update"] = None + self._shadowSubscribeCallbackTable["delta"] = None + self._shadowSubscribeStatusTable = dict() + self._shadowSubscribeStatusTable["get"] = 0 + self._shadowSubscribeStatusTable["delete"] = 0 + self._shadowSubscribeStatusTable["update"] = 0 + self._tokenPool = dict() + self._dataStructureLock = Lock() + + def _doNonPersistentUnsubscribe(self, currentAction): + self._shadowManagerHandler.basicShadowUnsubscribe(self._shadowName, currentAction) + self._logger.info("Unsubscribed to " + currentAction + " accepted/rejected topics for deviceShadow: " + self._shadowName) + + def generalCallback(self, client, userdata, message): + # In Py3.x, message.payload comes in as a bytes(string) + # json.loads needs a string input + with self._dataStructureLock: + currentTopic = message.topic + currentAction = self._parseTopicAction(currentTopic) # get/delete/update/delta + currentType = self._parseTopicType(currentTopic) # accepted/rejected/delta + payloadUTF8String = message.payload.decode('utf-8') + # get/delete/update: Need to deal with token, timer and unsubscribe + if currentAction in ["get", "delete", "update"]: + # Check for token + self._basicJSONParserHandler.setString(payloadUTF8String) + if 
self._basicJSONParserHandler.validateJSON(): # Filter out invalid JSON + currentToken = self._basicJSONParserHandler.getAttributeValue(u"clientToken") + if currentToken is not None: + self._logger.debug("shadow message clientToken: " + currentToken) + if currentToken is not None and currentToken in self._tokenPool.keys(): # Filter out JSON without the desired token + # Sync local version when it is an accepted response + self._logger.debug("Token is in the pool. Type: " + currentType) + if currentType == "accepted": + incomingVersion = self._basicJSONParserHandler.getAttributeValue(u"version") + # If it is get/update accepted response, we need to sync the local version + if incomingVersion is not None and incomingVersion > self._lastVersionInSync and currentAction != "delete": + self._lastVersionInSync = incomingVersion + # If it is a delete accepted, we need to reset the version + else: + self._lastVersionInSync = -1 # The version will always be synced for the next incoming delta/GU-accepted response + # Cancel the timer and clear the token + self._tokenPool[currentToken].cancel() + del self._tokenPool[currentToken] + # Need to unsubscribe? 
+ self._shadowSubscribeStatusTable[currentAction] -= 1 + if not self._isPersistentSubscribe and self._shadowSubscribeStatusTable.get(currentAction) <= 0: + self._shadowSubscribeStatusTable[currentAction] = 0 + processNonPersistentUnsubscribe = Thread(target=self._doNonPersistentUnsubscribe, args=[currentAction]) + processNonPersistentUnsubscribe.start() + # Custom callback + if self._shadowSubscribeCallbackTable.get(currentAction) is not None: + processCustomCallback = Thread(target=self._shadowSubscribeCallbackTable[currentAction], args=[payloadUTF8String, currentType, currentToken]) + processCustomCallback.start() + # delta: Watch for version + else: + currentType += "/" + self._parseTopicShadowName(currentTopic) + # Sync local version + self._basicJSONParserHandler.setString(payloadUTF8String) + if self._basicJSONParserHandler.validateJSON(): # Filter out JSON without version + incomingVersion = self._basicJSONParserHandler.getAttributeValue(u"version") + if incomingVersion is not None and incomingVersion > self._lastVersionInSync: + self._lastVersionInSync = incomingVersion + # Custom callback + if self._shadowSubscribeCallbackTable.get(currentAction) is not None: + processCustomCallback = Thread(target=self._shadowSubscribeCallbackTable[currentAction], args=[payloadUTF8String, currentType, None]) + processCustomCallback.start() + + def _parseTopicAction(self, srcTopic): + ret = None + fragments = srcTopic.split('/') + if fragments[5] == "delta": + ret = "delta" + else: + ret = fragments[4] + return ret + + def _parseTopicType(self, srcTopic): + fragments = srcTopic.split('/') + return fragments[5] + + def _parseTopicShadowName(self, srcTopic): + fragments = srcTopic.split('/') + return fragments[2] + + def _timerHandler(self, srcActionName, srcToken): + with self._dataStructureLock: + # Don't crash if we try to remove an unknown token + if srcToken not in self._tokenPool: + self._logger.warn('Tried to remove non-existent token from pool: %s' % str(srcToken)) + 
return + # Remove the token + del self._tokenPool[srcToken] + # Need to unsubscribe? + self._shadowSubscribeStatusTable[srcActionName] -= 1 + if not self._isPersistentSubscribe and self._shadowSubscribeStatusTable.get(srcActionName) <= 0: + self._shadowSubscribeStatusTable[srcActionName] = 0 + self._shadowManagerHandler.basicShadowUnsubscribe(self._shadowName, srcActionName) + # Notify time-out issue + if self._shadowSubscribeCallbackTable.get(srcActionName) is not None: + self._logger.info("Shadow request with token: " + str(srcToken) + " has timed out.") + self._shadowSubscribeCallbackTable[srcActionName]("REQUEST TIME OUT", "timeout", srcToken) + + def shadowGet(self, srcCallback, srcTimeout): + """ + **Description** + + Retrieve the device shadow JSON document from AWS IoT by publishing an empty JSON document to the + corresponding shadow topics. Shadow response topics will be subscribed to receive responses from + AWS IoT regarding the result of the get operation. Retrieved shadow JSON document will be available + in the registered callback. If no response is received within the provided timeout, a timeout + notification will be passed into the registered callback. + + **Syntax** + + .. code:: python + + # Retrieve the shadow JSON document from AWS IoT, with a timeout set to 5 seconds + BotShadow.shadowGet(customCallback, 5) + + **Parameters** + + *srcCallback* - Function to be called when the response for this shadow request comes back. Should + be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the + JSON document returned, :code:`responseStatus` indicates whether the request has been accepted, + rejected or is a delta message, :code:`token` is the token used for tracing in this request. + + *srcTimeout* - Timeout to determine whether the request is invalid. When a request gets timeout, + a timeout notification will be generated and put into the registered callback to notify users. 
+ + **Returns** + + The token used for tracing in this shadow request. + + """ + with self._dataStructureLock: + # Update callback data structure + self._shadowSubscribeCallbackTable["get"] = srcCallback + # Update number of pending feedback + self._shadowSubscribeStatusTable["get"] += 1 + # clientToken + currentToken = self._tokenHandler.getNextToken() + self._tokenPool[currentToken] = Timer(srcTimeout, self._timerHandler, ["get", currentToken]) + self._basicJSONParserHandler.setString("{}") + self._basicJSONParserHandler.validateJSON() + self._basicJSONParserHandler.setAttributeValue("clientToken", currentToken) + currentPayload = self._basicJSONParserHandler.regenerateString() + # Two subscriptions + if not self._isPersistentSubscribe or not self._isGetSubscribed: + self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "get", self.generalCallback) + self._isGetSubscribed = True + self._logger.info("Subscribed to get accepted/rejected topics for deviceShadow: " + self._shadowName) + # One publish + self._shadowManagerHandler.basicShadowPublish(self._shadowName, "get", currentPayload) + # Start the timer + self._tokenPool[currentToken].start() + return currentToken + + def shadowDelete(self, srcCallback, srcTimeout): + """ + **Description** + + Delete the device shadow from AWS IoT by publishing an empty JSON document to the corresponding + shadow topics. Shadow response topics will be subscribed to receive responses from AWS IoT + regarding the result of the get operation. Responses will be available in the registered callback. + If no response is received within the provided timeout, a timeout notification will be passed into + the registered callback. + + **Syntax** + + .. code:: python + + # Delete the device shadow from AWS IoT, with a timeout set to 5 seconds + BotShadow.shadowDelete(customCallback, 5) + + **Parameters** + + *srcCallback* - Function to be called when the response for this shadow request comes back. 
Should
+        be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the
+        JSON document returned, :code:`responseStatus` indicates whether the request has been accepted,
+        rejected or is a delta message, and :code:`token` is the token used for tracing in this request.
+
+        *srcTimeout* - Timeout, in seconds, used to determine whether the request is invalid. When a request
+        times out, a timeout notification will be generated and passed into the registered callback to
+        notify users.
+
+        **Returns**
+
+        The token used for tracing in this shadow request.
+
+        """
+        with self._dataStructureLock:
+            # Update callback data structure
+            self._shadowSubscribeCallbackTable["delete"] = srcCallback
+            # Update number of pending feedback
+            self._shadowSubscribeStatusTable["delete"] += 1
+            # clientToken
+            currentToken = self._tokenHandler.getNextToken()
+            self._tokenPool[currentToken] = Timer(srcTimeout, self._timerHandler, ["delete", currentToken])
+            self._basicJSONParserHandler.setString("{}")
+            self._basicJSONParserHandler.validateJSON()
+            self._basicJSONParserHandler.setAttributeValue("clientToken", currentToken)
+            currentPayload = self._basicJSONParserHandler.regenerateString()
+        # Two subscriptions
+        if not self._isPersistentSubscribe or not self._isDeleteSubscribed:
+            self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "delete", self.generalCallback)
+            self._isDeleteSubscribed = True
+            self._logger.info("Subscribed to delete accepted/rejected topics for deviceShadow: " + self._shadowName)
+        # One publish
+        self._shadowManagerHandler.basicShadowPublish(self._shadowName, "delete", currentPayload)
+        # Start the timer
+        self._tokenPool[currentToken].start()
+        return currentToken
+
+    def shadowUpdate(self, srcJSONPayload, srcCallback, srcTimeout):
+        """
+        **Description**
+
+        Update the device shadow JSON document string in AWS IoT by publishing the provided JSON
+        document to the corresponding shadow topics.
Shadow response topics will be subscribed to
+        receive responses from AWS IoT regarding the result of the update operation. Responses will be
+        available in the registered callback. If no response is received within the provided timeout,
+        a timeout notification will be passed into the registered callback.
+
+        **Syntax**
+
+        .. code:: python
+
+          # Update the shadow JSON document in AWS IoT, with a timeout set to 5 seconds
+          BotShadow.shadowUpdate(newShadowJSONDocumentString, customCallback, 5)
+
+        **Parameters**
+
+        *srcJSONPayload* - JSON document string used to update the shadow JSON document in AWS IoT.
+
+        *srcCallback* - Function to be called when the response for this shadow request comes back. Should
+        be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the
+        JSON document returned, :code:`responseStatus` indicates whether the request has been accepted,
+        rejected or is a delta message, and :code:`token` is the token used for tracing in this request.
+
+        *srcTimeout* - Timeout, in seconds, used to determine whether the request is invalid. When a request
+        times out, a timeout notification will be generated and passed into the registered callback to
+        notify users.
+
+        **Returns**
+
+        The token used for tracing in this shadow request.
+ + """ + # Validate JSON + self._basicJSONParserHandler.setString(srcJSONPayload) + if self._basicJSONParserHandler.validateJSON(): + with self._dataStructureLock: + # clientToken + currentToken = self._tokenHandler.getNextToken() + self._tokenPool[currentToken] = Timer(srcTimeout, self._timerHandler, ["update", currentToken]) + self._basicJSONParserHandler.setAttributeValue("clientToken", currentToken) + JSONPayloadWithToken = self._basicJSONParserHandler.regenerateString() + # Update callback data structure + self._shadowSubscribeCallbackTable["update"] = srcCallback + # Update number of pending feedback + self._shadowSubscribeStatusTable["update"] += 1 + # Two subscriptions + if not self._isPersistentSubscribe or not self._isUpdateSubscribed: + self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "update", self.generalCallback) + self._isUpdateSubscribed = True + self._logger.info("Subscribed to update accepted/rejected topics for deviceShadow: " + self._shadowName) + # One publish + self._shadowManagerHandler.basicShadowPublish(self._shadowName, "update", JSONPayloadWithToken) + # Start the timer + self._tokenPool[currentToken].start() + else: + raise ValueError("Invalid JSON file.") + return currentToken + + def shadowRegisterDeltaCallback(self, srcCallback): + """ + **Description** + + Listen on delta topics for this device shadow by subscribing to delta topics. Whenever there + is a difference between the desired and reported state, the registered callback will be called + and the delta payload will be available in the callback. + + **Syntax** + + .. code:: python + + # Listen on delta topics for BotShadow + BotShadow.shadowRegisterDeltaCallback(customCallback) + + **Parameters** + + *srcCallback* - Function to be called when the response for this shadow request comes back. 
Should
+        be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the
+        JSON document returned, :code:`responseStatus` indicates whether the request has been accepted,
+        rejected or is a delta message, and :code:`token` is the token used for tracing in this request.
+
+        **Returns**
+
+        None
+
+        """
+        with self._dataStructureLock:
+            # Update callback data structure
+            self._shadowSubscribeCallbackTable["delta"] = srcCallback
+        # One subscription
+        self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "delta", self.generalCallback)
+        self._logger.info("Subscribed to delta topic for deviceShadow: " + self._shadowName)
+
+    def shadowUnregisterDeltaCallback(self):
+        """
+        **Description**
+
+        Cancel listening on delta topics for this device shadow by unsubscribing from delta topics. No
+        delta messages will be received after this API call, even if there is a difference between the
+        desired and reported state.
+
+        **Syntax**
+
+        .. code:: python
+
+          # Cancel listening on delta topics for BotShadow
+          BotShadow.shadowUnregisterDeltaCallback()
+
+        **Parameters**
+
+        None
+
+        **Returns**
+
+        None
+
+        """
+        with self._dataStructureLock:
+            # Update callback data structure
+            del self._shadowSubscribeCallbackTable["delta"]
+        # One unsubscription
+        self._shadowManagerHandler.basicShadowUnsubscribe(self._shadowName, "delta")
+        self._logger.info("Unsubscribed from delta topic for deviceShadow: " + self._shadowName)
diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/shadow/shadowManager.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/shadow/shadowManager.py
new file mode 100644
index 0000000..3dafa74
--- /dev/null
+++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/shadow/shadowManager.py
@@ -0,0 +1,83 @@
+# /*
+# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+# *
+# * Licensed under the Apache License, Version 2.0 (the "License").
+# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +import logging +import time +from threading import Lock + +class _shadowAction: + _actionType = ["get", "update", "delete", "delta"] + + def __init__(self, srcShadowName, srcActionName): + if srcActionName is None or srcActionName not in self._actionType: + raise TypeError("Unsupported shadow action.") + self._shadowName = srcShadowName + self._actionName = srcActionName + self.isDelta = srcActionName == "delta" + if self.isDelta: + self._topicDelta = "$aws/things/" + str(self._shadowName) + "/shadow/update/delta" + else: + self._topicGeneral = "$aws/things/" + str(self._shadowName) + "/shadow/" + str(self._actionName) + self._topicAccept = "$aws/things/" + str(self._shadowName) + "/shadow/" + str(self._actionName) + "/accepted" + self._topicReject = "$aws/things/" + str(self._shadowName) + "/shadow/" + str(self._actionName) + "/rejected" + + def getTopicGeneral(self): + return self._topicGeneral + + def getTopicAccept(self): + return self._topicAccept + + def getTopicReject(self): + return self._topicReject + + def getTopicDelta(self): + return self._topicDelta + + +class shadowManager: + + _logger = logging.getLogger(__name__) + + def __init__(self, srcMQTTCore): + # Load in mqttCore + if srcMQTTCore is None: + raise TypeError("None type inputs detected.") + self._mqttCoreHandler = srcMQTTCore + self._shadowSubUnsubOperationLock = Lock() + + def basicShadowPublish(self, srcShadowName, srcShadowAction, srcPayload): + currentShadowAction = _shadowAction(srcShadowName, srcShadowAction) + 
self._mqttCoreHandler.publish(currentShadowAction.getTopicGeneral(), srcPayload, 0, False) + + def basicShadowSubscribe(self, srcShadowName, srcShadowAction, srcCallback): + with self._shadowSubUnsubOperationLock: + currentShadowAction = _shadowAction(srcShadowName, srcShadowAction) + if currentShadowAction.isDelta: + self._mqttCoreHandler.subscribe(currentShadowAction.getTopicDelta(), 0, srcCallback) + else: + self._mqttCoreHandler.subscribe(currentShadowAction.getTopicAccept(), 0, srcCallback) + self._mqttCoreHandler.subscribe(currentShadowAction.getTopicReject(), 0, srcCallback) + time.sleep(2) + + def basicShadowUnsubscribe(self, srcShadowName, srcShadowAction): + with self._shadowSubUnsubOperationLock: + currentShadowAction = _shadowAction(srcShadowName, srcShadowAction) + if currentShadowAction.isDelta: + self._mqttCoreHandler.unsubscribe(currentShadowAction.getTopicDelta()) + else: + self._logger.debug(currentShadowAction.getTopicAccept()) + self._mqttCoreHandler.unsubscribe(currentShadowAction.getTopicAccept()) + self._logger.debug(currentShadowAction.getTopicReject()) + self._mqttCoreHandler.unsubscribe(currentShadowAction.getTopicReject()) diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/util/__init__.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/util/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/util/enums.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/util/enums.py new file mode 100644 index 0000000..3aa3d2f --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/util/enums.py @@ -0,0 +1,19 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. 
+# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + + +class DropBehaviorTypes(object): + DROP_OLDEST = 0 + DROP_NEWEST = 1 diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/util/providers.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/util/providers.py new file mode 100644 index 0000000..d90789a --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/core/util/providers.py @@ -0,0 +1,92 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
+# */ + + +class CredentialsProvider(object): + + def __init__(self): + self._ca_path = "" + + def set_ca_path(self, ca_path): + self._ca_path = ca_path + + def get_ca_path(self): + return self._ca_path + + +class CertificateCredentialsProvider(CredentialsProvider): + + def __init__(self): + CredentialsProvider.__init__(self) + self._cert_path = "" + self._key_path = "" + + def set_cert_path(self,cert_path): + self._cert_path = cert_path + + def set_key_path(self, key_path): + self._key_path = key_path + + def get_cert_path(self): + return self._cert_path + + def get_key_path(self): + return self._key_path + + +class IAMCredentialsProvider(CredentialsProvider): + + def __init__(self): + CredentialsProvider.__init__(self) + self._aws_access_key_id = "" + self._aws_secret_access_key = "" + self._aws_session_token = "" + + def set_access_key_id(self, access_key_id): + self._aws_access_key_id = access_key_id + + def set_secret_access_key(self, secret_access_key): + self._aws_secret_access_key = secret_access_key + + def set_session_token(self, session_token): + self._aws_session_token = session_token + + def get_access_key_id(self): + return self._aws_access_key_id + + def get_secret_access_key(self): + return self._aws_secret_access_key + + def get_session_token(self): + return self._aws_session_token + + +class EndpointProvider(object): + + def __init__(self): + self._host = "" + self._port = -1 + + def set_host(self, host): + self._host = host + + def set_port(self, port): + self._port = port + + def get_host(self): + return self._host + + def get_port(self): + return self._port diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/exception/AWSIoTExceptions.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/exception/AWSIoTExceptions.py new file mode 100644 index 0000000..0de5401 --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/exception/AWSIoTExceptions.py @@ -0,0 +1,153 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. 
All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +import AWSIoTPythonSDK.exception.operationTimeoutException as operationTimeoutException +import AWSIoTPythonSDK.exception.operationError as operationError + + +# Serial Exception +class acceptTimeoutException(Exception): + def __init__(self, msg="Accept Timeout"): + self.message = msg + + +# MQTT Operation Timeout Exception +class connectTimeoutException(operationTimeoutException.operationTimeoutException): + def __init__(self, msg="Connect Timeout"): + self.message = msg + + +class disconnectTimeoutException(operationTimeoutException.operationTimeoutException): + def __init__(self, msg="Disconnect Timeout"): + self.message = msg + + +class publishTimeoutException(operationTimeoutException.operationTimeoutException): + def __init__(self, msg="Publish Timeout"): + self.message = msg + + +class subscribeTimeoutException(operationTimeoutException.operationTimeoutException): + def __init__(self, msg="Subscribe Timeout"): + self.message = msg + + +class unsubscribeTimeoutException(operationTimeoutException.operationTimeoutException): + def __init__(self, msg="Unsubscribe Timeout"): + self.message = msg + + +# MQTT Operation Error +class connectError(operationError.operationError): + def __init__(self, errorCode): + self.message = "Connect Error: " + str(errorCode) + + +class disconnectError(operationError.operationError): + def __init__(self, errorCode): + self.message = "Disconnect Error: " + str(errorCode) + + +class 
publishError(operationError.operationError): + def __init__(self, errorCode): + self.message = "Publish Error: " + str(errorCode) + + +class publishQueueFullException(operationError.operationError): + def __init__(self): + self.message = "Internal Publish Queue Full" + + +class publishQueueDisabledException(operationError.operationError): + def __init__(self): + self.message = "Offline publish request dropped because queueing is disabled" + + +class subscribeError(operationError.operationError): + def __init__(self, errorCode): + self.message = "Subscribe Error: " + str(errorCode) + + +class subscribeQueueFullException(operationError.operationError): + def __init__(self): + self.message = "Internal Subscribe Queue Full" + + +class subscribeQueueDisabledException(operationError.operationError): + def __init__(self): + self.message = "Offline subscribe request dropped because queueing is disabled" + + +class unsubscribeError(operationError.operationError): + def __init__(self, errorCode): + self.message = "Unsubscribe Error: " + str(errorCode) + + +class unsubscribeQueueFullException(operationError.operationError): + def __init__(self): + self.message = "Internal Unsubscribe Queue Full" + + +class unsubscribeQueueDisabledException(operationError.operationError): + def __init__(self): + self.message = "Offline unsubscribe request dropped because queueing is disabled" + + +# Websocket Error +class wssNoKeyInEnvironmentError(operationError.operationError): + def __init__(self): + self.message = "No AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY detected in $ENV." + + +class wssHandShakeError(operationError.operationError): + def __init__(self): + self.message = "Error in WSS handshake." 
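The exception modules in this commit follow a single pattern: every timeout condition subclasses `operationTimeoutException`, every protocol or queueing failure subclasses `operationError`, and each concrete class only overrides `self.message`. As a condensed, self-contained sketch (stand-in classes for illustration, not the SDK's own modules), calling code can branch on the two base families instead of on every concrete class:

```python
# Stand-in sketch of the SDK's two exception families (illustrative only).

class operationTimeoutException(Exception):
    """Base for time-based failures -- often worth retrying."""
    def __init__(self, msg="Operation Timeout"):
        self.message = msg


class operationError(Exception):
    """Base for protocol/queueing failures -- usually a config or usage issue."""
    def __init__(self, msg="Operation Error"):
        self.message = msg


class publishTimeoutException(operationTimeoutException):
    def __init__(self, msg="Publish Timeout"):
        self.message = msg


class publishError(operationError):
    def __init__(self, errorCode):
        self.message = "Publish Error: " + str(errorCode)


def classify(exc):
    # Branch on the family, not the concrete class, so adding new
    # operation types does not change the handling logic.
    if isinstance(exc, operationTimeoutException):
        return "retry"
    if isinstance(exc, operationError):
        return "fail"
    return "unknown"
```

Handlers written this way keep working as the SDK adds more concrete subclasses, since every subclass funnels into one of the two bases.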
+ + +# Greengrass Discovery Error +class DiscoveryDataNotFoundException(operationError.operationError): + def __init__(self): + self.message = "No discovery data found" + + +class DiscoveryTimeoutException(operationTimeoutException.operationTimeoutException): + def __init__(self, message="Discovery request timed out"): + self.message = message + + +class DiscoveryInvalidRequestException(operationError.operationError): + def __init__(self): + self.message = "Invalid discovery request" + + +class DiscoveryUnauthorizedException(operationError.operationError): + def __init__(self): + self.message = "Discovery request not authorized" + + +class DiscoveryThrottlingException(operationError.operationError): + def __init__(self): + self.message = "Too many discovery requests" + + +class DiscoveryFailure(operationError.operationError): + def __init__(self, message): + self.message = message + + +# Client Error +class ClientError(Exception): + def __init__(self, message): + self.message = message diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/exception/__init__.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/exception/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/exception/operationError.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/exception/operationError.py new file mode 100644 index 0000000..1c86dfc --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/exception/operationError.py @@ -0,0 +1,19 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. 
See the License for the specific language governing +# * permissions and limitations under the License. +# */ + + +class operationError(Exception): + def __init__(self, msg="Operation Error"): + self.message = msg diff --git a/aws-iot-device-sdk-python/AWSIoTPythonSDK/exception/operationTimeoutException.py b/aws-iot-device-sdk-python/AWSIoTPythonSDK/exception/operationTimeoutException.py new file mode 100644 index 0000000..737154e --- /dev/null +++ b/aws-iot-device-sdk-python/AWSIoTPythonSDK/exception/operationTimeoutException.py @@ -0,0 +1,19 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + + +class operationTimeoutException(Exception): + def __init__(self, msg="Operation Timeout"): + self.message = msg diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/MQTTLib.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/MQTTLib.py new file mode 100644 index 0000000..2a2527a --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/MQTTLib.py @@ -0,0 +1,1779 @@ +# +#/* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. 
This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +from AWSIoTPythonSDK.core.util.providers import CertificateCredentialsProvider +from AWSIoTPythonSDK.core.util.providers import IAMCredentialsProvider +from AWSIoTPythonSDK.core.util.providers import EndpointProvider +from AWSIoTPythonSDK.core.jobs.thingJobManager import jobExecutionTopicType +from AWSIoTPythonSDK.core.jobs.thingJobManager import jobExecutionTopicReplyType +from AWSIoTPythonSDK.core.protocol.mqtt_core import MqttCore +import AWSIoTPythonSDK.core.shadow.shadowManager as shadowManager +import AWSIoTPythonSDK.core.shadow.deviceShadow as deviceShadow +import AWSIoTPythonSDK.core.jobs.thingJobManager as thingJobManager + +# Constants +# - Protocol types: +MQTTv3_1 = 3 +MQTTv3_1_1 = 4 + +DROP_OLDEST = 0 +DROP_NEWEST = 1 + +class AWSIoTMQTTClient: + + def __init__(self, clientID, protocolType=MQTTv3_1_1, useWebsocket=False, cleanSession=True): + """ + + The client class that connects to and accesses AWS IoT over MQTT v3.1/3.1.1. + + The following connection types are available: + + - TLSv1.2 Mutual Authentication + + X.509 certificate-based secured MQTT connection to AWS IoT + + - Websocket SigV4 + + IAM credential-based secured MQTT connection over Websocket to AWS IoT + + It provides basic synchronous MQTT operations in the classic MQTT publish-subscribe + model, along with configurations of on-top features: + + - Auto reconnect/resubscribe + + - Progressive reconnect backoff + + - Offline publish requests queueing with draining + + **Syntax** + + .. 
code:: python
+
+          import AWSIoTPythonSDK.MQTTLib as AWSIoTPyMQTT
+
+          # Create an AWS IoT MQTT Client using TLSv1.2 Mutual Authentication
+          myAWSIoTMQTTClient = AWSIoTPyMQTT.AWSIoTMQTTClient("testIoTPySDK")
+          # Create an AWS IoT MQTT Client using Websocket SigV4
+          myAWSIoTMQTTClient = AWSIoTPyMQTT.AWSIoTMQTTClient("testIoTPySDK", useWebsocket=True)
+
+        **Parameters**
+
+        *clientID* - String that denotes the client identifier used to connect to AWS IoT.
+        If an empty string is provided, the client id for this connection will be randomly generated
+        on the server side.
+
+        *protocolType* - MQTT version in use for this connection. Could be :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1` or :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1_1`
+
+        *useWebsocket* - Boolean that denotes enabling MQTT over Websocket SigV4 or not.
+
+        **Returns**
+
+        :code:`AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTClient` object
+
+        """
+        self._mqtt_core = MqttCore(clientID, cleanSession, protocolType, useWebsocket)
+
+    # Configuration APIs
+    def configureLastWill(self, topic, payload, QoS, retain=False):
+        """
+        **Description**
+
+        Used to configure the last will topic, payload and QoS of the client. Should be called before connect.
+
+        **Syntax**
+
+        .. code:: python
+
+          myAWSIoTMQTTClient.configureLastWill("last/Will/Topic", "lastWillPayload", 0)
+
+        **Parameters**
+
+        *topic* - Topic name that last will publishes to.
+
+        *payload* - Payload to publish for last will.
+
+        *QoS* - Quality of Service. Could be 0 or 1.
+
+        **Returns**
+
+        None
+
+        """
+        self._mqtt_core.configure_last_will(topic, payload, QoS, retain)
+
+    def clearLastWill(self):
+        """
+        **Description**
+
+        Used to clear the last will configuration that was previously set through configureLastWill.
+
+        **Syntax**
+
+        ..
code:: python + + myAWSIoTMQTTClient.clearLastWill() + + **Parameter** + + None + + **Returns** + + None + + """ + self._mqtt_core.clear_last_will() + + def configureEndpoint(self, hostName, portNumber): + """ + **Description** + + Used to configure the host name and port number the client tries to connect to. Should be called + before connect. + + **Syntax** + + .. code:: python + + myAWSIoTMQTTClient.configureEndpoint("random.iot.region.amazonaws.com", 8883) + + **Parameters** + + *hostName* - String that denotes the host name of the user-specific AWS IoT endpoint. + + *portNumber* - Integer that denotes the port number to connect to. Could be :code:`8883` for + TLSv1.2 Mutual Authentication or :code:`443` for Websocket SigV4 and TLSv1.2 Mutual Authentication + with ALPN extension. + + **Returns** + + None + + """ + endpoint_provider = EndpointProvider() + endpoint_provider.set_host(hostName) + endpoint_provider.set_port(portNumber) + self._mqtt_core.configure_endpoint(endpoint_provider) + if portNumber == 443 and not self._mqtt_core.use_wss(): + self._mqtt_core.configure_alpn_protocols() + + def configureIAMCredentials(self, AWSAccessKeyID, AWSSecretAccessKey, AWSSessionToken=""): + """ + **Description** + + Used to configure/update the custom IAM credentials for Websocket SigV4 connection to + AWS IoT. Should be called before connect. + + **Syntax** + + .. code:: python + + myAWSIoTMQTTClient.configureIAMCredentials(obtainedAccessKeyID, obtainedSecretAccessKey, obtainedSessionToken) + + .. note:: + + Hard-coding credentials into custom script is NOT recommended. Please use AWS Cognito identity service + or other credential provider. + + **Parameters** + + *AWSAccessKeyID* - AWS Access Key Id from user-specific IAM credentials. + + *AWSSecretAccessKey* - AWS Secret Access Key from user-specific IAM credentials. + + *AWSSessionToken* - AWS Session Token for temporary authentication from STS. 
+ + **Returns** + + None + + """ + iam_credentials_provider = IAMCredentialsProvider() + iam_credentials_provider.set_access_key_id(AWSAccessKeyID) + iam_credentials_provider.set_secret_access_key(AWSSecretAccessKey) + iam_credentials_provider.set_session_token(AWSSessionToken) + self._mqtt_core.configure_iam_credentials(iam_credentials_provider) + + def configureCredentials(self, CAFilePath, KeyPath="", CertificatePath=""): # Should be good for MutualAuth certs config and Websocket rootCA config + """ + **Description** + + Used to configure the rootCA, private key and certificate files. Should be called before connect. + + **Syntax** + + .. code:: python + + myAWSIoTMQTTClient.configureCredentials("PATH/TO/ROOT_CA", "PATH/TO/PRIVATE_KEY", "PATH/TO/CERTIFICATE") + + **Parameters** + + *CAFilePath* - Path to read the root CA file. Required for all connection types. + + *KeyPath* - Path to read the private key. Required for X.509 certificate based connection. + + *CertificatePath* - Path to read the certificate. Required for X.509 certificate based connection. + + **Returns** + + None + + """ + cert_credentials_provider = CertificateCredentialsProvider() + cert_credentials_provider.set_ca_path(CAFilePath) + cert_credentials_provider.set_key_path(KeyPath) + cert_credentials_provider.set_cert_path(CertificatePath) + self._mqtt_core.configure_cert_credentials(cert_credentials_provider) + + def configureAutoReconnectBackoffTime(self, baseReconnectQuietTimeSecond, maxReconnectQuietTimeSecond, stableConnectionTimeSecond): + """ + **Description** + + Used to configure the auto-reconnect backoff timing. Should be called before connect. + + **Syntax** + + .. code:: python + + # Configure the auto-reconnect backoff to start with 1 second and use 128 seconds as a maximum back off time. + # Connection over 20 seconds is considered stable and will reset the back off time back to its base. 
+          myAWSIoTMQTTClient.configureAutoReconnectBackoffTime(1, 128, 20)
+
+        **Parameters**
+
+        *baseReconnectQuietTimeSecond* - The initial back off time to start with, in seconds.
+        Should be less than the stableConnectionTime.
+
+        *maxReconnectQuietTimeSecond* - The maximum back off time, in seconds.
+
+        *stableConnectionTimeSecond* - The number of seconds a connection must last to be considered stable.
+        Back off time will be reset to base once the connection is stable.
+
+        **Returns**
+
+        None
+
+        """
+        self._mqtt_core.configure_reconnect_back_off(baseReconnectQuietTimeSecond, maxReconnectQuietTimeSecond, stableConnectionTimeSecond)
+
+    def configureOfflinePublishQueueing(self, queueSize, dropBehavior=DROP_NEWEST):
+        """
+        **Description**
+
+        Used to configure the queue size and drop behavior for the offline requests queueing. Should be
+        called before connect. Queueable offline requests include publish, subscribe and unsubscribe.
+
+        **Syntax**
+
+        .. code:: python
+
+          import AWSIoTPythonSDK.MQTTLib as AWSIoTPyMQTT
+
+          # Configure the offline queue for publish requests to be 20 in size and drop the oldest
+          # request when the queue is full.
+          myAWSIoTMQTTClient.configureOfflinePublishQueueing(20, AWSIoTPyMQTT.DROP_OLDEST)
+
+        **Parameters**
+
+        *queueSize* - Size of the queue for offline publish requests queueing.
+        If set to 0, the queue is disabled. If set to -1, the queue size is set to be infinite.
+
+        *dropBehavior* - The type of drop behavior when the queue is full.
+        Could be :code:`AWSIoTPythonSDK.core.util.enums.DropBehaviorTypes.DROP_OLDEST` or
+        :code:`AWSIoTPythonSDK.core.util.enums.DropBehaviorTypes.DROP_NEWEST`.
+
+        **Returns**
+
+        None
+
+        """
+        self._mqtt_core.configure_offline_requests_queue(queueSize, dropBehavior)
+
+    def configureDrainingFrequency(self, frequencyInHz):
+        """
+        **Description**
+
+        Used to configure the draining speed to clear up the queued requests when the connection is back.
+        Should be called before connect.
+
+        **Syntax**
+
+        .. code:: python
+
+          # Configure the draining speed to be 2 requests/second
+          myAWSIoTMQTTClient.configureDrainingFrequency(2)
+
+        .. note::
+
+          Make sure the draining speed is fast enough and faster than the publish rate. Slow draining
+          could result in an infinite draining process.
+
+        **Parameters**
+
+        *frequencyInHz* - The draining speed to clear the queued requests, in requests/second.
+
+        **Returns**
+
+        None
+
+        """
+        self._mqtt_core.configure_draining_interval_sec(1/float(frequencyInHz))
+
+    def configureConnectDisconnectTimeout(self, timeoutSecond):
+        """
+        **Description**
+
+        Used to configure the time in seconds to wait for a CONNACK or a disconnect to complete.
+        Should be called before connect.
+
+        **Syntax**
+
+        .. code:: python
+
+          # Configure connect/disconnect timeout to be 10 seconds
+          myAWSIoTMQTTClient.configureConnectDisconnectTimeout(10)
+
+        **Parameters**
+
+        *timeoutSecond* - Time in seconds to wait for a CONNACK or a disconnect to complete.
+
+        **Returns**
+
+        None
+
+        """
+        self._mqtt_core.configure_connect_disconnect_timeout_sec(timeoutSecond)
+
+    def configureMQTTOperationTimeout(self, timeoutSecond):
+        """
+        **Description**
+
+        Used to configure the timeout in seconds for MQTT QoS 1 publish, subscribe and unsubscribe.
+        Should be called before connect.
+
+        **Syntax**
+
+        .. code:: python
+
+          # Configure MQTT operation timeout to be 5 seconds
+          myAWSIoTMQTTClient.configureMQTTOperationTimeout(5)
+
+        **Parameters**
+
+        *timeoutSecond* - Time in seconds to wait for a PUBACK/SUBACK/UNSUBACK.
+
+        **Returns**
+
+        None
+
+        """
+        self._mqtt_core.configure_operation_timeout_sec(timeoutSecond)
+
+    def configureUsernamePassword(self, username, password=None):
+        """
+        **Description**
+
+        Used to configure the username and password used in the CONNECT packet.
+
+        **Syntax**
+
+        ..
code:: python
+
+ # Configure username and password
+ myAWSIoTMQTTClient.configureUsernamePassword("myUsername", "myPassword")
+
+ **Parameters**
+
+ *username* - Username used in the username field of the CONNECT packet.
+
+ *password* - Password used in the password field of the CONNECT packet.
+
+ **Returns**
+
+ None
+
+ """
+ self._mqtt_core.configure_username_password(username, password)
+
+ def configureSocketFactory(self, socket_factory):
+ """
+ **Description**
+
+ Configure a socket factory to customize the socket type used for the
+ MQTT connection. Creating a custom socket allows for configuration of a proxy.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Configure socket factory
+ custom_args = {"arg1": "val1", "arg2": "val2"}
+ socket_factory = lambda: custom.create_connection((host, port), **custom_args)
+ myAWSIoTMQTTClient.configureSocketFactory(socket_factory)
+
+ **Parameters**
+
+ *socket_factory* - Anonymous function which creates a custom socket to spec.
+
+ **Returns**
+
+ None
+
+ """
+ self._mqtt_core.configure_socket_factory(socket_factory)
+
+ def enableMetricsCollection(self):
+ """
+ **Description**
+
+ Used to enable SDK metrics collection. The username field in the CONNECT packet will be used to append the SDK name
+ and SDK version in use, and communicate them to the AWS IoT cloud. This metrics collection is enabled by default.
+
+ **Syntax**
+
+ .. code:: python
+
+ myAWSIoTMQTTClient.enableMetricsCollection()
+
+ **Parameters**
+
+ None
+
+ **Returns**
+
+ None
+
+ """
+ self._mqtt_core.enable_metrics_collection()
+
+ def disableMetricsCollection(self):
+ """
+ **Description**
+
+ Used to disable SDK metrics collection.
+
+ **Syntax**
+
+ ..
code:: python
+
+ myAWSIoTMQTTClient.disableMetricsCollection()
+
+ **Parameters**
+
+ None
+
+ **Returns**
+
+ None
+
+ """
+ self._mqtt_core.disable_metrics_collection()
+
+ # MQTT functionality APIs
+ def connect(self, keepAliveIntervalSecond=600):
+ """
+ **Description**
+
+ Connect to AWS IoT, with user-specific keepalive interval configuration.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Connect to AWS IoT with default keepalive set to 600 seconds
+ myAWSIoTMQTTClient.connect()
+ # Connect to AWS IoT with keepalive interval set to 1200 seconds
+ myAWSIoTMQTTClient.connect(1200)
+
+ **Parameters**
+
+ *keepAliveIntervalSecond* - Time in seconds for the interval of sending MQTT ping requests.
+ A shorter keep-alive interval allows the client to detect disconnects more quickly.
+ Default set to 600 seconds.
+
+ **Returns**
+
+ True if the connect attempt succeeded. False if failed.
+
+ """
+ self._load_callbacks()
+ return self._mqtt_core.connect(keepAliveIntervalSecond)
+
+ def connectAsync(self, keepAliveIntervalSecond=600, ackCallback=None):
+ """
+ **Description**
+
+ Connect asynchronously to AWS IoT, with user-specific keepalive interval configuration and CONNACK callback.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Connect to AWS IoT with default keepalive set to 600 seconds and a custom CONNACK callback
+ myAWSIoTMQTTClient.connectAsync(ackCallback=my_connack_callback)
+ # Connect to AWS IoT with keepalive interval set to 1200 seconds and a custom CONNACK callback
+ myAWSIoTMQTTClient.connectAsync(keepAliveIntervalSecond=1200, ackCallback=myConnackCallback)
+
+ **Parameters**
+
+ *keepAliveIntervalSecond* - Time in seconds for the interval of sending MQTT ping requests.
+ Default set to 600 seconds.
+
+ *ackCallback* - Callback to be invoked when the client receives a CONNACK. Should be in form
+ :code:`customCallback(mid, data)`, where :code:`mid` is the packet id for the connect request
+ and :code:`data` is the connect result code.
+
+ **Returns**
+
+ Connect request packet id, for tracking purposes in the corresponding callback.
+
+ """
+ self._load_callbacks()
+ return self._mqtt_core.connect_async(keepAliveIntervalSecond, ackCallback)
+
+ def _load_callbacks(self):
+ self._mqtt_core.on_online = self.onOnline
+ self._mqtt_core.on_offline = self.onOffline
+ self._mqtt_core.on_message = self.onMessage
+
+ def disconnect(self):
+ """
+ **Description**
+
+ Disconnect from AWS IoT.
+
+ **Syntax**
+
+ .. code:: python
+
+ myAWSIoTMQTTClient.disconnect()
+
+ **Parameters**
+
+ None
+
+ **Returns**
+
+ True if the disconnect attempt succeeded. False if failed.
+
+ """
+ return self._mqtt_core.disconnect()
+
+ def disconnectAsync(self, ackCallback=None):
+ """
+ **Description**
+
+ Disconnect asynchronously from AWS IoT.
+
+ **Syntax**
+
+ .. code:: python
+
+ myAWSIoTMQTTClient.disconnectAsync(ackCallback=myDisconnectCallback)
+
+ **Parameters**
+
+ *ackCallback* - Callback to be invoked when the client finishes sending disconnect and internal clean-up.
+ Should be in form :code:`customCallback(mid, data)`, where :code:`mid` is the packet id for the disconnect
+ request and :code:`data` is the disconnect result code.
+
+ **Returns**
+
+ Disconnect request packet id, for tracking purposes in the corresponding callback.
+
+ """
+ return self._mqtt_core.disconnect_async(ackCallback)
+
+ def publish(self, topic, payload, QoS):
+ """
+ **Description**
+
+ Publish a new message to the desired topic with QoS.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Publish a QoS0 message "myPayload" to topic "myTopic"
+ myAWSIoTMQTTClient.publish("myTopic", "myPayload", 0)
+ # Publish a QoS1 message "myPayloadWithQos1" to topic "myTopic/sub"
+ myAWSIoTMQTTClient.publish("myTopic/sub", "myPayloadWithQos1", 1)
+
+ **Parameters**
+
+ *topic* - Topic name to publish to.
+
+ *payload* - Payload to publish.
+
+ *QoS* - Quality of Service. Could be 0 or 1.
+
+ **Returns**
+
+ True if the publish request has been sent to paho.
False if the request did not reach paho.
+
+ """
+ return self._mqtt_core.publish(topic, payload, QoS, False) # Disable retain for publish by now
+
+ def publishAsync(self, topic, payload, QoS, ackCallback=None):
+ """
+ **Description**
+
+ Publish a new message asynchronously to the desired topic with QoS and PUBACK callback. Note that the ack
+ callback configuration for a QoS0 publish request will be ignored, because there is no PUBACK reception for QoS0.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Publish a QoS0 message "myPayload" to topic "myTopic"
+ myAWSIoTMQTTClient.publishAsync("myTopic", "myPayload", 0)
+ # Publish a QoS1 message "myPayloadWithQos1" to topic "myTopic/sub", with custom PUBACK callback
+ myAWSIoTMQTTClient.publishAsync("myTopic/sub", "myPayloadWithQos1", 1, ackCallback=myPubackCallback)
+
+ **Parameters**
+
+ *topic* - Topic name to publish to.
+
+ *payload* - Payload to publish.
+
+ *QoS* - Quality of Service. Could be 0 or 1.
+
+ *ackCallback* - Callback to be invoked when the client receives a PUBACK. Should be in form
+ :code:`customCallback(mid)`, where :code:`mid` is the packet id for the publish request.
+
+ **Returns**
+
+ Publish request packet id, for tracking purposes in the corresponding callback.
+
+ """
+ return self._mqtt_core.publish_async(topic, payload, QoS, False, ackCallback)
+
+ def subscribe(self, topic, QoS, callback):
+ """
+ **Description**
+
+ Subscribe to the desired topic and register a callback.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Subscribe to "myTopic" with QoS0 and register a callback
+ myAWSIoTMQTTClient.subscribe("myTopic", 0, customCallback)
+ # Subscribe to "myTopic/#" with QoS1 and register a callback
+ myAWSIoTMQTTClient.subscribe("myTopic/#", 1, customCallback)
+
+ **Parameters**
+
+ *topic* - Topic name or filter to subscribe to.
+
+ *QoS* - Quality of Service. Could be 0 or 1.
+
+ *callback* - Function to be called when a new message for the subscribed topic
+ comes in.
Should be in form :code:`customCallback(client, userdata, message)`, where
+ :code:`message` contains :code:`topic` and :code:`payload`. Note that :code:`client` and :code:`userdata` are
+ here just to be aligned with the underlying Paho callback function signature. These fields are pending
+ deprecation and should not be depended on.
+
+ **Returns**
+
+ True if the subscribe attempt succeeded. False if failed.
+
+ """
+ return self._mqtt_core.subscribe(topic, QoS, callback)
+
+ def subscribeAsync(self, topic, QoS, ackCallback=None, messageCallback=None):
+ """
+ **Description**
+
+ Subscribe to the desired topic and register a message callback with SUBACK callback.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Subscribe to "myTopic" with QoS0, custom SUBACK callback and a message callback
+ myAWSIoTMQTTClient.subscribeAsync("myTopic", 0, ackCallback=mySubackCallback, messageCallback=customMessageCallback)
+ # Subscribe to "myTopic/#" with QoS1, custom SUBACK callback and a message callback
+ myAWSIoTMQTTClient.subscribeAsync("myTopic/#", 1, ackCallback=mySubackCallback, messageCallback=customMessageCallback)
+
+ **Parameters**
+
+ *topic* - Topic name or filter to subscribe to.
+
+ *QoS* - Quality of Service. Could be 0 or 1.
+
+ *ackCallback* - Callback to be invoked when the client receives a SUBACK. Should be in form
+ :code:`customCallback(mid, data)`, where :code:`mid` is the packet id for the subscribe request and
+ :code:`data` is the granted QoS for this subscription.
+
+ *messageCallback* - Function to be called when a new message for the subscribed topic
+ comes in. Should be in form :code:`customCallback(client, userdata, message)`, where
+ :code:`message` contains :code:`topic` and :code:`payload`. Note that :code:`client` and :code:`userdata` are
+ here just to be aligned with the underlying Paho callback function signature. These fields are pending
+ deprecation and should not be depended on.
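The message-callback shape described above can be made concrete with a small sketch. The `Message` class here is a hypothetical stand-in for the message object Paho delivers; only `topic` and `payload` should be relied on:

```python
class Message:
    """Minimal stand-in for the Paho MQTT message object (illustrative only)."""
    def __init__(self, topic, payload):
        self.topic = topic
        self.payload = payload

received = []

def custom_callback(client, userdata, message):
    # client and userdata exist only for Paho signature compatibility
    # and are pending deprecation; use message.topic and message.payload.
    received.append((message.topic, message.payload))

# Simulate the SDK invoking the registered callback on message arrival
custom_callback(None, None, Message("myTopic/sub", b"myPayload"))
print(received)  # [('myTopic/sub', b'myPayload')]
```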
+
+ **Returns**
+
+ Subscribe request packet id, for tracking purposes in the corresponding callback.
+
+ """
+ return self._mqtt_core.subscribe_async(topic, QoS, ackCallback, messageCallback)
+
+ def unsubscribe(self, topic):
+ """
+ **Description**
+
+ Unsubscribe from the desired topic.
+
+ **Syntax**
+
+ .. code:: python
+
+ myAWSIoTMQTTClient.unsubscribe("myTopic")
+
+ **Parameters**
+
+ *topic* - Topic name or filter to unsubscribe from.
+
+ **Returns**
+
+ True if the unsubscribe attempt succeeded. False if failed.
+
+ """
+ return self._mqtt_core.unsubscribe(topic)
+
+ def unsubscribeAsync(self, topic, ackCallback=None):
+ """
+ **Description**
+
+ Unsubscribe from the desired topic with UNSUBACK callback.
+
+ **Syntax**
+
+ .. code:: python
+
+ myAWSIoTMQTTClient.unsubscribeAsync("myTopic", ackCallback=myUnsubackCallback)
+
+ **Parameters**
+
+ *topic* - Topic name or filter to unsubscribe from.
+
+ *ackCallback* - Callback to be invoked when the client receives an UNSUBACK. Should be in form
+ :code:`customCallback(mid)`, where :code:`mid` is the packet id for the unsubscribe request.
+
+ **Returns**
+
+ Unsubscribe request packet id, for tracking purposes in the corresponding callback.
+
+ """
+ return self._mqtt_core.unsubscribe_async(topic, ackCallback)
+
+ def onOnline(self):
+ """
+ **Description**
+
+ Callback that gets called when the client is online. The callback registration should happen before calling
+ connect/connectAsync.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Register an onOnline callback
+ myAWSIoTMQTTClient.onOnline = myOnOnlineCallback
+
+ **Parameters**
+
+ None
+
+ **Returns**
+
+ None
+
+ """
+ pass
+
+ def onOffline(self):
+ """
+ **Description**
+
+ Callback that gets called when the client is offline. The callback registration should happen before calling
+ connect/connectAsync.
+
+ **Syntax**
+
+ ..
code:: python
+
+ # Register an onOffline callback
+ myAWSIoTMQTTClient.onOffline = myOnOfflineCallback
+
+ **Parameters**
+
+ None
+
+ **Returns**
+
+ None
+
+ """
+ pass
+
+ def onMessage(self, message):
+ """
+ **Description**
+
+ Callback that gets called when the client receives a new message. The callback registration should happen before
+ calling connect/connectAsync. This callback, if present, will always be triggered regardless of whether there is
+ any message callback registered upon subscribe API call. It serves to aggregate the processing of
+ received messages in one function.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Register an onMessage callback
+ myAWSIoTMQTTClient.onMessage = myOnMessageCallback
+
+ **Parameters**
+
+ *message* - Received MQTT message. It contains the source topic as :code:`message.topic`, and the payload as
+ :code:`message.payload`.
+
+ **Returns**
+
+ None
+
+ """
+ pass
+
+class _AWSIoTMQTTDelegatingClient(object):
+
+ def __init__(self, clientID, protocolType=MQTTv3_1_1, useWebsocket=False, cleanSession=True, awsIoTMQTTClient=None):
+ """
+
+ This class is used internally by the SDK and should not be instantiated directly.
+
+ It delegates to a provided AWS IoT MQTT Client or creates a new one given the configuration
+ parameters, and exposes core operations so that subclasses can provide convenience methods.
+
+ **Syntax**
+
+ None
+
+ **Parameters**
+
+ *clientID* - String that denotes the client identifier used to connect to AWS IoT.
+ If an empty string is provided, the client id for this connection will be randomly generated
+ on the server side.
+
+ *protocolType* - MQTT version in use for this connection. Could be :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1` or :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1_1`
+
+ *useWebsocket* - Boolean that denotes enabling MQTT over Websocket SigV4 or not.
+
+ **Returns**
+
+ AWSIoTPythonSDK.MQTTLib._AWSIoTMQTTDelegatingClient object
+
+ """
+ # AWSIOTMQTTClient instance
+ self._AWSIoTMQTTClient = awsIoTMQTTClient if awsIoTMQTTClient is not None else AWSIoTMQTTClient(clientID, protocolType, useWebsocket, cleanSession)
+
+ # Configuration APIs
+ def configureLastWill(self, topic, payload, QoS):
+ """
+ **Description**
+
+ Used to configure the last will topic, payload and QoS of the client. Should be called before connect. This is a public
+ facing API inherited by application level public clients.
+
+ **Syntax**
+
+ .. code:: python
+
+ myShadowClient.configureLastWill("last/Will/Topic", "lastWillPayload", 0)
+ myJobsClient.configureLastWill("last/Will/Topic", "lastWillPayload", 0)
+
+ **Parameters**
+
+ *topic* - Topic name that last will publishes to.
+
+ *payload* - Payload to publish for last will.
+
+ *QoS* - Quality of Service. Could be 0 or 1.
+
+ **Returns**
+
+ None
+
+ """
+ # AWSIoTMQTTClient.configureLastWill(srcTopic, srcPayload, srcQos)
+ self._AWSIoTMQTTClient.configureLastWill(topic, payload, QoS)
+
+ def clearLastWill(self):
+ """
+ **Description**
+
+ Used to clear the last will configuration that was previously set through configureLastWill. This is a public
+ facing API inherited by application level public clients.
+
+ **Syntax**
+
+ .. code:: python
+
+ myShadowClient.clearLastWill()
+ myJobsClient.clearLastWill()
+
+ **Parameters**
+
+ None
+
+ **Returns**
+
+ None
+
+ """
+ # AWSIoTMQTTClient.clearLastWill()
+ self._AWSIoTMQTTClient.clearLastWill()
+
+ def configureEndpoint(self, hostName, portNumber):
+ """
+ **Description**
+
+ Used to configure the host name and port number the underlying AWS IoT MQTT Client tries to connect to. Should be called
+ before connect. This is a public facing API inherited by application level public clients.
+
+ **Syntax**
+
+ ..
code:: python
+
+ myShadowClient.configureEndpoint("random.iot.region.amazonaws.com", 8883)
+ myJobsClient.configureEndpoint("random.iot.region.amazonaws.com", 8883)
+
+ **Parameters**
+
+ *hostName* - String that denotes the host name of the user-specific AWS IoT endpoint.
+
+ *portNumber* - Integer that denotes the port number to connect to. Could be :code:`8883` for
+ TLSv1.2 Mutual Authentication or :code:`443` for Websocket SigV4 and TLSv1.2 Mutual Authentication
+ with ALPN extension.
+
+ **Returns**
+
+ None
+
+ """
+ # AWSIoTMQTTClient.configureEndpoint
+ self._AWSIoTMQTTClient.configureEndpoint(hostName, portNumber)
+
+ def configureIAMCredentials(self, AWSAccessKeyID, AWSSecretAccessKey, AWSSTSToken=""):
+ """
+ **Description**
+
+ Used to configure/update the custom IAM credentials for the underlying AWS IoT MQTT Client
+ for Websocket SigV4 connection to AWS IoT. Should be called before connect. This is a public
+ facing API inherited by application level public clients.
+
+ **Syntax**
+
+ .. code:: python
+
+ myShadowClient.configureIAMCredentials(obtainedAccessKeyID, obtainedSecretAccessKey, obtainedSessionToken)
+ myJobsClient.configureIAMCredentials(obtainedAccessKeyID, obtainedSecretAccessKey, obtainedSessionToken)
+
+ .. note::
+
+ Hard-coding credentials into a custom script is NOT recommended. Please use the AWS Cognito identity
+ service or another credential provider.
+
+ **Parameters**
+
+ *AWSAccessKeyID* - AWS Access Key Id from user-specific IAM credentials.
+
+ *AWSSecretAccessKey* - AWS Secret Access Key from user-specific IAM credentials.
+
+ *AWSSTSToken* - AWS Session Token for temporary authentication from STS.
+
+ **Returns**
+
+ None
+
+ """
+ # AWSIoTMQTTClient.configureIAMCredentials
+ self._AWSIoTMQTTClient.configureIAMCredentials(AWSAccessKeyID, AWSSecretAccessKey, AWSSTSToken)
+
+ def configureCredentials(self, CAFilePath, KeyPath="", CertificatePath=""): # Should be good for MutualAuth and Websocket
+ """
+ **Description**
+
+ Used to configure the rootCA, private key and certificate files. Should be called before connect. This is a public
+ facing API inherited by application level public clients.
+
+ **Syntax**
+
+ .. code:: python
+
+ myShadowClient.configureCredentials("PATH/TO/ROOT_CA", "PATH/TO/PRIVATE_KEY", "PATH/TO/CERTIFICATE")
+ myJobsClient.configureCredentials("PATH/TO/ROOT_CA", "PATH/TO/PRIVATE_KEY", "PATH/TO/CERTIFICATE")
+
+ **Parameters**
+
+ *CAFilePath* - Path to read the root CA file. Required for all connection types.
+
+ *KeyPath* - Path to read the private key. Required for X.509 certificate based connection.
+
+ *CertificatePath* - Path to read the certificate. Required for X.509 certificate based connection.
+
+ **Returns**
+
+ None
+
+ """
+ # AWSIoTMQTTClient.configureCredentials
+ self._AWSIoTMQTTClient.configureCredentials(CAFilePath, KeyPath, CertificatePath)
+
+ def configureAutoReconnectBackoffTime(self, baseReconnectQuietTimeSecond, maxReconnectQuietTimeSecond, stableConnectionTimeSecond):
+ """
+ **Description**
+
+ Used to configure the auto-reconnect backoff timing. Should be called before connect. This is a public
+ facing API inherited by application level public clients.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Configure the auto-reconnect backoff to start with 1 second and use 128 seconds as a maximum back off time.
+ # Connection over 20 seconds is considered stable and will reset the back off time back to its base.
+ myShadowClient.configureAutoReconnectBackoffTime(1, 128, 20)
+ myJobsClient.configureAutoReconnectBackoffTime(1, 128, 20)
+
+ **Parameters**
+
+ *baseReconnectQuietTimeSecond* - The initial back off time to start with, in seconds.
+ Should be less than the stableConnectionTime.
+
+ *maxReconnectQuietTimeSecond* - The maximum back off time, in seconds.
+
+ *stableConnectionTimeSecond* - The number of seconds a connection must last to be considered stable.
+ Back off time will be reset to base once the connection is stable.
+
+ **Returns**
+
+ None
+
+ """
+ # AWSIoTMQTTClient.configureAutoReconnectBackoffTime
+ self._AWSIoTMQTTClient.configureAutoReconnectBackoffTime(baseReconnectQuietTimeSecond, maxReconnectQuietTimeSecond, stableConnectionTimeSecond)
+
+ def configureConnectDisconnectTimeout(self, timeoutSecond):
+ """
+ **Description**
+
+ Used to configure the time in seconds to wait for a CONNACK or a disconnect to complete.
+ Should be called before connect. This is a public facing API inherited by application level public clients.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Configure connect/disconnect timeout to be 10 seconds
+ myShadowClient.configureConnectDisconnectTimeout(10)
+ myJobsClient.configureConnectDisconnectTimeout(10)
+
+ **Parameters**
+
+ *timeoutSecond* - Time in seconds to wait for a CONNACK or a disconnect to complete.
+
+ **Returns**
+
+ None
+
+ """
+ # AWSIoTMQTTClient.configureConnectDisconnectTimeout
+ self._AWSIoTMQTTClient.configureConnectDisconnectTimeout(timeoutSecond)
+
+ def configureMQTTOperationTimeout(self, timeoutSecond):
+ """
+ **Description**
+
+ Used to configure the timeout in seconds for MQTT QoS 1 publish, subscribe and unsubscribe.
+ Should be called before connect. This is a public facing API inherited by application level public clients.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Configure MQTT operation timeout to be 5 seconds
+ myShadowClient.configureMQTTOperationTimeout(5)
+ myJobsClient.configureMQTTOperationTimeout(5)
+
+ **Parameters**
+
+ *timeoutSecond* - Time in seconds to wait for a PUBACK/SUBACK/UNSUBACK.
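The progressive backoff behavior documented for configureAutoReconnectBackoffTime (a base quiet time growing toward a maximum, reset after a stable connection) can be sketched as follows. The SDK only documents the base, maximum and reset; the exact doubling growth curve below is an assumption for illustration:

```python
def reconnect_quiet_times(base_s, max_s, attempts):
    """Illustrative progressive backoff: the quiet time doubles on each
    consecutive failed reconnect, capped at max_s. (Doubling is an assumed
    growth curve; the SDK documents only base, max, and stable-reset.)"""
    quiet = base_s
    times = []
    for _ in range(attempts):
        times.append(quiet)
        quiet = min(quiet * 2, max_s)
    return times

# With base 1 s and max 128 s, nine consecutive failures wait:
print(reconnect_quiet_times(1, 128, 9))  # [1, 2, 4, 8, 16, 32, 64, 128, 128]
# A connection that then stays up longer than stableConnectionTimeSecond
# would reset the next quiet time back to the 1-second base.
```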
+
+ **Returns**
+
+ None
+
+ """
+ # AWSIoTMQTTClient.configureMQTTOperationTimeout
+ self._AWSIoTMQTTClient.configureMQTTOperationTimeout(timeoutSecond)
+
+ def configureUsernamePassword(self, username, password=None):
+ """
+ **Description**
+
+ Used to configure the username and password used in the CONNECT packet. This is a public facing API
+ inherited by application level public clients.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Configure username and password
+ myShadowClient.configureUsernamePassword("myUsername", "myPassword")
+ myJobsClient.configureUsernamePassword("myUsername", "myPassword")
+
+ **Parameters**
+
+ *username* - Username used in the username field of the CONNECT packet.
+
+ *password* - Password used in the password field of the CONNECT packet.
+
+ **Returns**
+
+ None
+
+ """
+ self._AWSIoTMQTTClient.configureUsernamePassword(username, password)
+
+ def configureSocketFactory(self, socket_factory):
+ """
+ **Description**
+
+ Configure a socket factory to customize the socket type used for the
+ MQTT connection. Creating a custom socket allows for configuration of a proxy.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Configure socket factory
+ custom_args = {"arg1": "val1", "arg2": "val2"}
+ socket_factory = lambda: custom.create_connection((host, port), **custom_args)
+ myShadowClient.configureSocketFactory(socket_factory)
+ myJobsClient.configureSocketFactory(socket_factory)
+
+ **Parameters**
+
+ *socket_factory* - Anonymous function which creates a custom socket to spec.
+
+ **Returns**
+
+ None
+
+ """
+ self._AWSIoTMQTTClient.configureSocketFactory(socket_factory)
+
+ def enableMetricsCollection(self):
+ """
+ **Description**
+
+ Used to enable SDK metrics collection. The username field in the CONNECT packet will be used to append the SDK name
+ and SDK version in use, and communicate them to the AWS IoT cloud. This metrics collection is enabled by default.
+ This is a public facing API inherited by application level public clients.
+
+ **Syntax**
+
+ ..
code:: python
+
+ myShadowClient.enableMetricsCollection()
+ myJobsClient.enableMetricsCollection()
+
+ **Parameters**
+
+ None
+
+ **Returns**
+
+ None
+
+ """
+ self._AWSIoTMQTTClient.enableMetricsCollection()
+
+ def disableMetricsCollection(self):
+ """
+ **Description**
+
+ Used to disable SDK metrics collection. This is a public facing API inherited by application level public clients.
+
+ **Syntax**
+
+ .. code:: python
+
+ myShadowClient.disableMetricsCollection()
+ myJobsClient.disableMetricsCollection()
+
+ **Parameters**
+
+ None
+
+ **Returns**
+
+ None
+
+ """
+ self._AWSIoTMQTTClient.disableMetricsCollection()
+
+ # Start the MQTT connection
+ def connect(self, keepAliveIntervalSecond=600):
+ """
+ **Description**
+
+ Connect to AWS IoT, with user-specific keepalive interval configuration. This is a public facing API inherited
+ by application level public clients.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Connect to AWS IoT with default keepalive set to 600 seconds
+ myShadowClient.connect()
+ myJobsClient.connect()
+ # Connect to AWS IoT with keepalive interval set to 1200 seconds
+ myShadowClient.connect(1200)
+ myJobsClient.connect(1200)
+
+ **Parameters**
+
+ *keepAliveIntervalSecond* - Time in seconds for the interval of sending MQTT ping requests.
+ Default set to 600 seconds.
+
+ **Returns**
+
+ True if the connect attempt succeeded. False if failed.
+
+ """
+ self._load_callbacks()
+ return self._AWSIoTMQTTClient.connect(keepAliveIntervalSecond)
+
+ def _load_callbacks(self):
+ self._AWSIoTMQTTClient.onOnline = self.onOnline
+ self._AWSIoTMQTTClient.onOffline = self.onOffline
+
+ # End the MQTT connection
+ def disconnect(self):
+ """
+ **Description**
+
+ Disconnect from AWS IoT. This is a public facing API inherited by application level public clients.
+
+ **Syntax**
+
+ .. code:: python
+
+ myShadowClient.disconnect()
+ myJobsClient.disconnect()
+
+ **Parameters**
+
+ None
+
+ **Returns**
+
+ True if the disconnect attempt succeeded.
False if failed.
+
+ """
+ return self._AWSIoTMQTTClient.disconnect()
+
+ # MQTT connection management API
+ def getMQTTConnection(self):
+ """
+ **Description**
+
+ Retrieve the underlying AWS IoT MQTT Client, making it possible to perform
+ plain MQTT operations along with specialized operations using the same single connection.
+ This is a public facing API inherited by application level public clients.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Retrieve the AWS IoT MQTT Client used in the AWS IoT MQTT Delegating Client
+ thisAWSIoTMQTTClient = myShadowClient.getMQTTConnection()
+ thisAWSIoTMQTTClient = myJobsClient.getMQTTConnection()
+ # Perform plain MQTT operations using the same connection
+ thisAWSIoTMQTTClient.publish("Topic", "Payload", 1)
+ ...
+
+ **Parameters**
+
+ None
+
+ **Returns**
+
+ AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTClient object
+
+ """
+ # Return the internal AWSIoTMQTTClient instance
+ return self._AWSIoTMQTTClient
+
+ def onOnline(self):
+ """
+ **Description**
+
+ Callback that gets called when the client is online. The callback registration should happen before calling
+ connect. This is a public facing API inherited by application level public clients.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Register an onOnline callback
+ myShadowClient.onOnline = myOnOnlineCallback
+ myJobsClient.onOnline = myOnOnlineCallback
+
+ **Parameters**
+
+ None
+
+ **Returns**
+
+ None
+
+ """
+ pass
+
+ def onOffline(self):
+ """
+ **Description**
+
+ Callback that gets called when the client is offline. The callback registration should happen before calling
+ connect. This is a public facing API inherited by application level public clients.
+
+ **Syntax**
+
+ ..
code:: python
+
+ # Register an onOffline callback
+ myShadowClient.onOffline = myOnOfflineCallback
+ myJobsClient.onOffline = myOnOfflineCallback
+
+ **Parameters**
+
+ None
+
+ **Returns**
+
+ None
+
+ """
+ pass
+
+
+class AWSIoTMQTTShadowClient(_AWSIoTMQTTDelegatingClient):
+
+ def __init__(self, clientID, protocolType=MQTTv3_1_1, useWebsocket=False, cleanSession=True, awsIoTMQTTClient=None):
+ """
+
+ The client class that manages the device shadow and accesses its functionality in AWS IoT over MQTT v3.1/3.1.1.
+
+ It delegates to the AWS IoT MQTT Client and exposes device shadow related operations.
+ It shares the same connection types, synchronous MQTT operations and partial on-top features
+ with the AWS IoT MQTT Client:
+
+ - Auto reconnect/resubscribe
+
+ Same as AWS IoT MQTT Client.
+
+ - Progressive reconnect backoff
+
+ Same as AWS IoT MQTT Client.
+
+ - Offline publish requests queueing with draining
+
+ Disabled by default. Queueing is not allowed for time-sensitive shadow requests/messages.
+
+ **Syntax**
+
+ .. code:: python
+
+ import AWSIoTPythonSDK.MQTTLib as AWSIoTPyMQTT
+
+ # Create an AWS IoT MQTT Shadow Client using TLSv1.2 Mutual Authentication
+ myAWSIoTMQTTShadowClient = AWSIoTPyMQTT.AWSIoTMQTTShadowClient("testIoTPySDK")
+ # Create an AWS IoT MQTT Shadow Client using Websocket SigV4
+ myAWSIoTMQTTShadowClient = AWSIoTPyMQTT.AWSIoTMQTTShadowClient("testIoTPySDK", useWebsocket=True)
+
+ **Parameters**
+
+ *clientID* - String that denotes the client identifier used to connect to AWS IoT.
+ If an empty string is provided, the client id for this connection will be randomly generated
+ on the server side.
+
+ *protocolType* - MQTT version in use for this connection. Could be :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1` or :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1_1`
+
+ *useWebsocket* - Boolean that denotes enabling MQTT over Websocket SigV4 or not.
+
+ **Returns**
+
+ AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTShadowClient object
+
+ """
+ super(AWSIoTMQTTShadowClient, self).__init__(clientID, protocolType, useWebsocket, cleanSession, awsIoTMQTTClient)
+ # Leave passed in clients alone
+ if awsIoTMQTTClient is None:
+ # Configure it to disable offline publish queueing
+ self._AWSIoTMQTTClient.configureOfflinePublishQueueing(0) # Disable queueing, no queueing for time-sensitive shadow messages
+ self._AWSIoTMQTTClient.configureDrainingFrequency(10)
+ # Now retrieve the configured mqttCore and init a shadowManager instance
+ self._shadowManager = shadowManager.shadowManager(self._AWSIoTMQTTClient._mqtt_core)
+
+ # Shadow management API
+ def createShadowHandlerWithName(self, shadowName, isPersistentSubscribe):
+ """
+ **Description**
+
+ Create a device shadow handler using the specified shadow name and isPersistentSubscribe.
+
+ **Syntax**
+
+ .. code:: python
+
+ # Create a device shadow handler for shadow named "Bot1", using persistent subscription
+ Bot1Shadow = myAWSIoTMQTTShadowClient.createShadowHandlerWithName("Bot1", True)
+ # Create a device shadow handler for shadow named "Bot2", using non-persistent subscription
+ Bot2Shadow = myAWSIoTMQTTShadowClient.createShadowHandlerWithName("Bot2", False)
+
+ **Parameters**
+
+ *shadowName* - Name of the device shadow.
+
+ *isPersistentSubscribe* - Whether to stay subscribed to the shadow response (accepted/rejected) topics
+ after a response is received. The handler will subscribe the first time a shadow request is made and will
+ not unsubscribe if isPersistentSubscribe is set.
+
+ **Returns**
+
+ AWSIoTPythonSDK.core.shadow.deviceShadow.deviceShadow object, which exposes the device shadow interface.
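The persistent versus non-persistent subscription behavior just described can be summarized in a small state sketch. This is illustrative only, not the SDK's shadowManager logic; the class and method names are hypothetical:

```python
class ShadowSubscription:
    """Illustrative sketch of persistent vs. non-persistent shadow subscriptions."""

    def __init__(self, is_persistent_subscribe):
        self._persistent = is_persistent_subscribe
        self.subscribed = False

    def make_request(self):
        # Subscribe lazily when the first shadow request is made.
        if not self.subscribed:
            self.subscribed = True

    def handle_response(self):
        # Non-persistent handlers unsubscribe once accepted/rejected arrives;
        # persistent handlers keep the subscription for later requests.
        if not self._persistent:
            self.subscribed = False

persistent = ShadowSubscription(True)
persistent.make_request()
persistent.handle_response()
print(persistent.subscribed)  # True -- still subscribed for the next request
```

A non-persistent handler trades the cost of re-subscribing on every request for not holding subscriptions between requests.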
+ + """ + # Create and return a deviceShadow instance + return deviceShadow.deviceShadow(shadowName, isPersistentSubscribe, self._shadowManager) + # Shadow APIs are accessible in deviceShadow instance": + ### + # deviceShadow.shadowGet + # deviceShadow.shadowUpdate + # deviceShadow.shadowDelete + # deviceShadow.shadowRegisterDelta + # deviceShadow.shadowUnregisterDelta + +class AWSIoTMQTTThingJobsClient(_AWSIoTMQTTDelegatingClient): + + def __init__(self, clientID, thingName, QoS=0, protocolType=MQTTv3_1_1, useWebsocket=False, cleanSession=True, awsIoTMQTTClient=None): + """ + + The client class that specializes in handling jobs messages and accesses its functionality in AWS IoT over MQTT v3.1/3.1.1. + + It delegates to the AWS IoT MQTT Client and exposes jobs related operations. + It shares the same connection types, synchronous MQTT operations and partial on-top features + with the AWS IoT MQTT Client: + + - Auto reconnect/resubscribe + + Same as AWS IoT MQTT Client. + + - Progressive reconnect backoff + + Same as AWS IoT MQTT Client. + + - Offline publish requests queueing with draining + + Same as AWS IoT MQTT Client + + **Syntax** + + .. code:: python + + import AWSIoTPythonSDK.MQTTLib as AWSIoTPyMQTT + + # Create an AWS IoT MQTT Jobs Client using TLSv1.2 Mutual Authentication + myAWSIoTMQTTJobsClient = AWSIoTPyMQTT.AWSIoTMQTTThingJobsClient("testIoTPySDK") + # Create an AWS IoT MQTT Jobs Client using Websocket SigV4 + myAWSIoTMQTTJobsClient = AWSIoTPyMQTT.AWSIoTMQTTThingJobsClient("testIoTPySDK", useWebsocket=True) + + **Parameters** + + *clientID* - String that denotes the client identifier and client token for jobs requests + If empty string is provided, client id for this connection will be randomly generated + on server side. 
If an awsIoTMQTTClient is specified, this will not override the client ID
+ for the existing MQTT connection, and only impacts the client token for jobs request payloads.
+
+ *thingName* - String that represents the thingName used to send requests to the proper topics and subscribe
+ to the proper topics.
+
+ *QoS* - QoS used for all requests sent through this client.
+
+ *awsIoTMQTTClient* - An instance of AWSIoTMQTTClient to use if not None. If not None, the clientID, protocolType, useWebsocket,
+ and cleanSession parameters are not used. The caller is expected to invoke connect() prior to calling the pub/sub methods on this client.
+
+ *protocolType* - MQTT version in use for this connection. Could be :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1` or :code:`AWSIoTPythonSDK.MQTTLib.MQTTv3_1_1`
+
+ *useWebsocket* - Boolean that denotes enabling MQTT over Websocket SigV4 or not.
+
+ **Returns**
+
+ AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTThingJobsClient object
+
+ """
+ # AWSIOTMQTTClient instance
+ super(AWSIoTMQTTThingJobsClient, self).__init__(clientID, protocolType, useWebsocket, cleanSession, awsIoTMQTTClient)
+ self._thingJobManager = thingJobManager.thingJobManager(thingName, clientID)
+ self._QoS = QoS
+
+ def createJobSubscription(self, callback, jobExecutionType=jobExecutionTopicType.JOB_WILDCARD_TOPIC, jobReplyType=jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId=None):
+ """
+ **Description**
+
+ Synchronously creates an MQTT subscription to a jobs related topic based on the provided arguments.
+
+ **Syntax**
+
+ ..
code:: python + + #Subscribe to notify-next topic to monitor change in job referred to by $next + myAWSIoTMQTTJobsClient.createJobSubscription(callback, jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC) + #Subscribe to notify topic to monitor changes to jobs in pending list + myAWSIoTMQTTJobsClient.createJobSubscription(callback, jobExecutionTopicType.JOB_NOTIFY_TOPIC) + #Subscribe to receive messages for job execution updates + myAWSIoTMQTTJobsClient.createJobSubscription(callback, jobExecutionTopicType.JOB_UPDATE_TOPIC, jobExecutionTopicReplyType.JOB_ACCEPTED_REPLY_TYPE) + #Subscribe to receive messages for describing a job execution + myAWSIoTMQTTJobsClient.createJobSubscription(callback, jobExecutionTopicType.JOB_DESCRIBE_TOPIC, jobExecutionTopicReplyType.JOB_ACCEPTED_REPLY_TYPE, jobId) + + **Parameters** + + *callback* - Function to be called when a new message for the subscribed job topic + comes in. Should be in form :code:`customCallback(client, userdata, message)`, where + :code:`message` contains :code:`topic` and :code:`payload`. Note that :code:`client` and :code:`userdata` are + here just to be aligned with the underneath Paho callback function signature. These fields are pending to be + deprecated and should not be depended on. + + *jobExecutionType* - Member of the jobExecutionTopicType class specifying the jobs topic to subscribe to + Defaults to jobExecutionTopicType.JOB_WILDCARD_TOPIC + + *jobReplyType* - Member of the jobExecutionTopicReplyType class specifying the (optional) reply sub-topic to subscribe to + Defaults to jobExecutionTopicReplyType.JOB_REQUEST_TYPE which indicates the subscription isn't intended for a jobs reply topic + + *jobId* - JobId string if the topic type requires one. + Defaults to None + + **Returns** + + True if the subscribe attempt succeeded. False if failed. 
+ + """ + topic = self._thingJobManager.getJobTopic(jobExecutionType, jobReplyType, jobId) + return self._AWSIoTMQTTClient.subscribe(topic, self._QoS, callback) + + def createJobSubscriptionAsync(self, ackCallback, callback, jobExecutionType=jobExecutionTopicType.JOB_WILDCARD_TOPIC, jobReplyType=jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId=None): + """ + **Description** + + Asynchronously creates an MQTT subscription to a jobs related topic based on the provided arguments + + **Syntax** + + .. code:: python + + #Subscribe to notify-next topic to monitor change in job referred to by $next + myAWSIoTMQTTJobsClient.createJobSubscriptionAsync(callback, jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC) + #Subscribe to notify topic to monitor changes to jobs in pending list + myAWSIoTMQTTJobsClient.createJobSubscriptionAsync(callback, jobExecutionTopicType.JOB_NOTIFY_TOPIC) + #Subscribe to receive messages for job execution updates + myAWSIoTMQTTJobsClient.createJobSubscriptionAsync(callback, jobExecutionTopicType.JOB_UPDATE_TOPIC, jobExecutionTopicReplyType.JOB_ACCEPTED_REPLY_TYPE) + #Subscribe to receive messages for describing a job execution + myAWSIoTMQTTJobsClient.createJobSubscriptionAsync(callback, jobExecutionTopicType.JOB_DESCRIBE_TOPIC, jobExecutionTopicReplyType.JOB_ACCEPTED_REPLY_TYPE, jobId) + + **Parameters** + + *ackCallback* - Callback to be invoked when the client receives a SUBACK. Should be in form + :code:`customCallback(mid, data)`, where :code:`mid` is the packet id for the disconnect request and + :code:`data` is the granted QoS for this subscription. + + *callback* - Function to be called when a new message for the subscribed job topic + comes in. Should be in form :code:`customCallback(client, userdata, message)`, where + :code:`message` contains :code:`topic` and :code:`payload`. Note that :code:`client` and :code:`userdata` are + here just to be aligned with the underneath Paho callback function signature. 
These fields are pending to be + deprecated and should not be depended on. + + *jobExecutionType* - Member of the jobExecutionTopicType class specifying the jobs topic to subscribe to + Defaults to jobExecutionTopicType.JOB_WILDCARD_TOPIC + + *jobReplyType* - Member of the jobExecutionTopicReplyType class specifying the (optional) reply sub-topic to subscribe to + Defaults to jobExecutionTopicReplyType.JOB_REQUEST_TYPE which indicates the subscription isn't intended for a jobs reply topic + + *jobId* - JobId of the topic if the topic type requires one. + Defaults to None + + **Returns** + + Subscribe request packet id, for tracking purpose in the corresponding callback. + + """ + topic = self._thingJobManager.getJobTopic(jobExecutionType, jobReplyType, jobId) + return self._AWSIoTMQTTClient.subscribeAsync(topic, self._QoS, ackCallback, callback) + + def sendJobsQuery(self, jobExecTopicType, jobId=None): + """ + **Description** + + Publishes an MQTT jobs related request for a potentially specific jobId (or wildcard) + + **Syntax** + + .. code:: python + + #send a request to describe the next job + myAWSIoTMQTTJobsClient.sendJobsQuery(jobExecutionTopicType.JOB_DESCRIBE_TOPIC, '$next') + #send a request to get list of pending jobs + myAWSIoTMQTTJobsClient.sendJobsQuery(jobExecutionTopicType.JOB_GET_PENDING_TOPIC) + + **Parameters** + + *jobExecutionType* - Member of the jobExecutionTopicType class that correlates the jobs topic to publish to + + *jobId* - JobId string if the topic type requires one. + Defaults to None + + **Returns** + + True if the publish request has been sent to paho. False if the request did not reach paho. 
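The topic that a jobs query publishes to follows the reserved AWS IoT jobs topic scheme. The helper below is a hand-rolled sketch of that scheme only; the SDK builds these topics internally via `thingJobManager.getJobTopic`, and the function name and operation strings here are made up for illustration.

```python
# Sketch of the reserved AWS IoT jobs request topics. Illustrative only;
# the SDK resolves these via thingJobManager.getJobTopic.
def jobs_request_topic(thing_name, operation, job_id=None):
    base = "$aws/things/{0}/jobs".format(thing_name)
    if operation == "get-pending":
        return base + "/get"                       # list pending job executions
    if operation == "start-next":
        return base + "/start-next"                # start the next pending execution
    if operation == "describe":
        return "{0}/{1}/get".format(base, job_id)  # describe one execution ($next allowed)
    if operation == "update":
        return "{0}/{1}/update".format(base, job_id)
    raise ValueError("unknown jobs operation: " + operation)

print(jobs_request_topic("myThing", "describe", "$next"))
```

For example, `sendJobsQuery(jobExecutionTopicType.JOB_DESCRIBE_TOPIC, '$next')` publishes to the describe topic for `$next`.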
+
+        """
+        topic = self._thingJobManager.getJobTopic(jobExecTopicType, jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId)
+        payload = self._thingJobManager.serializeClientTokenPayload()
+        return self._AWSIoTMQTTClient.publish(topic, payload, self._QoS)
+
+    def sendJobsStartNext(self, statusDetails=None, stepTimeoutInMinutes=None):
+        """
+        **Description**
+
+        Publishes an MQTT message to the StartNextJobExecution topic. This will attempt to get the next pending
+        job execution and change its status to IN_PROGRESS.
+
+        **Syntax**
+
+        .. code:: python
+
+          #Start next job (set status to IN_PROGRESS) and update with optional statusDetails
+          myAWSIoTMQTTJobsClient.sendJobsStartNext({'StartedBy': 'myClientId'})
+
+        **Parameters**
+
+        *statusDetails* - Dictionary containing the key value pairs to use for the status details of the job execution
+
+        *stepTimeoutInMinutes* - Specifies the amount of time this device has to finish execution of this job.
+
+        **Returns**
+
+        True if the publish request has been sent to paho. False if the request did not reach paho.
+
+        """
+        topic = self._thingJobManager.getJobTopic(jobExecutionTopicType.JOB_START_NEXT_TOPIC, jobExecutionTopicReplyType.JOB_REQUEST_TYPE)
+        payload = self._thingJobManager.serializeStartNextPendingJobExecutionPayload(statusDetails, stepTimeoutInMinutes)
+        return self._AWSIoTMQTTClient.publish(topic, payload, self._QoS)
+
+    def sendJobsUpdate(self, jobId, status, statusDetails=None, expectedVersion=0, executionNumber=0, includeJobExecutionState=False, includeJobDocument=False, stepTimeoutInMinutes=None):
+        """
+        **Description**
+
+        Publishes an MQTT message to a corresponding job execution specific topic to update its status according to the parameters.
+        Can be used to change a job from QUEUED to IN_PROGRESS to SUCCEEDED or FAILED.
+
+        **Syntax**
+
+        .. code:: python
+
+          #Update job with id 'jobId123' to succeeded state, specifying new status details, with expectedVersion=1, executionNumber=2.
+
+          #For the response, include job execution state and not the job document
+          myAWSIoTMQTTJobsClient.sendJobsUpdate('jobId123', jobExecutionStatus.JOB_EXECUTION_SUCCEEDED, statusDetailsMap, 1, 2, True, False)
+
+          #Update job with id 'jobId456' to failed state
+          myAWSIoTMQTTJobsClient.sendJobsUpdate('jobId456', jobExecutionStatus.JOB_EXECUTION_FAILED)
+
+        **Parameters**
+
+        *jobId* - JobId string of the job execution to update the status of
+
+        *status* - Job execution status to change the job execution to. Member of jobExecutionStatus
+
+        *statusDetails* - New status details to set on the job execution
+
+        *expectedVersion* - The expected current version of the job execution. The Jobs service increments the version each time
+        the job execution is updated. If the version of the job execution stored by the Jobs service does not match, the update is
+        rejected with a VersionMismatch error, and an ErrorResponse that contains the current job execution status data is returned.
+        (This makes it unnecessary to perform a separate DescribeJobExecution request in order to obtain the job execution status data.)
+
+        *executionNumber* - A number that identifies a particular job execution on a particular device. If not specified, the latest job execution is used.
+
+        *includeJobExecutionState* - When included and set to True, the response contains the JobExecutionState field. The default is False.
+
+        *includeJobDocument* - When included and set to True, the response contains the JobDocument. The default is False.
+
+        *stepTimeoutInMinutes* - Specifies the amount of time this device has to finish execution of this job.
+
+        **Returns**
+
+        True if the publish request has been sent to paho. False if the request did not reach paho.
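The update request payload that the SDK serializes is a JSON document along these lines. The sketch below hand-builds a comparable payload to show which parameters map to which fields; the field names follow the AWS IoT UpdateJobExecution API, while the helper itself (name, defaults, omission of unset fields) is illustrative, not the SDK's `serializeJobExecutionUpdatePayload`.

```python
import json

# Illustrative sketch of an UpdateJobExecution request payload. Field names
# follow the AWS IoT Jobs API; zero/False defaults are simply omitted here.
def build_update_payload(status, status_details=None, expected_version=0,
                         include_job_execution_state=False, client_token=None):
    payload = {"status": status}
    if status_details:
        payload["statusDetails"] = status_details
    if expected_version:
        payload["expectedVersion"] = expected_version
    if include_job_execution_state:
        payload["includeJobExecutionState"] = True
    if client_token:
        payload["clientToken"] = client_token
    return json.dumps(payload)

print(build_update_payload("SUCCEEDED", {"progress": "100%"}, expected_version=1))
```

Mapping parameters to request fields this way is what lets the service enforce `expectedVersion` and echo back state only when asked.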
+
+        """
+        topic = self._thingJobManager.getJobTopic(jobExecutionTopicType.JOB_UPDATE_TOPIC, jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId)
+        payload = self._thingJobManager.serializeJobExecutionUpdatePayload(status, statusDetails, expectedVersion, executionNumber, includeJobExecutionState, includeJobDocument, stepTimeoutInMinutes)
+        return self._AWSIoTMQTTClient.publish(topic, payload, self._QoS)
+
+    def sendJobsDescribe(self, jobId, executionNumber=0, includeJobDocument=True):
+        """
+        **Description**
+
+        Publishes a message to the describe topic for a particular job.
+
+        **Syntax**
+
+        .. code:: python
+
+          #Describe job with id 'jobId1' of any executionNumber, job document will be included in response
+          myAWSIoTMQTTJobsClient.sendJobsDescribe('jobId1')
+
+          #Describe job with id 'jobId2', with execution number of 2, and includeJobDocument in the response
+          myAWSIoTMQTTJobsClient.sendJobsDescribe('jobId2', 2, True)
+
+        **Parameters**
+
+        *jobId* - JobId to describe. This is allowed to be a wildcard such as '$next'
+
+        *executionNumber* - A number that identifies a particular job execution on a particular device. If not specified, the latest job execution is used.
+
+        *includeJobDocument* - When included and set to True, the response contains the JobDocument.
+
+        **Returns**
+
+        True if the publish request has been sent to paho. False if the request did not reach paho.
+ + """ + topic = self._thingJobManager.getJobTopic(jobExecutionTopicType.JOB_DESCRIBE_TOPIC, jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId) + payload = self._thingJobManager.serializeDescribeJobExecutionPayload(executionNumber, includeJobDocument) + return self._AWSIoTMQTTClient.publish(topic, payload, self._QoS) diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/__init__.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/__init__.py new file mode 100644 index 0000000..eda1560 --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/__init__.py @@ -0,0 +1,3 @@ +__version__ = "1.4.8" + + diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/__init__.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/greengrass/__init__.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/greengrass/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/greengrass/discovery/__init__.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/greengrass/discovery/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/greengrass/discovery/models.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/greengrass/discovery/models.py new file mode 100644 index 0000000..ed8256d --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/greengrass/discovery/models.py @@ -0,0 +1,466 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 
+# *
+# * Licensed under the Apache License, Version 2.0 (the "License").
+# * You may not use this file except in compliance with the License.
+# * A copy of the License is located at
+# *
+# *  http://aws.amazon.com/apache2.0
+# *
+# * or in the "license" file accompanying this file. This file is distributed
+# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
+# * express or implied. See the License for the specific language governing
+# * permissions and limitations under the License.
+# */
+
+import json
+
+
+KEY_GROUP_LIST = "GGGroups"
+KEY_GROUP_ID = "GGGroupId"
+KEY_CORE_LIST = "Cores"
+KEY_CORE_ARN = "thingArn"
+KEY_CA_LIST = "CAs"
+KEY_CONNECTIVITY_INFO_LIST = "Connectivity"
+KEY_CONNECTIVITY_INFO_ID = "Id"
+KEY_HOST_ADDRESS = "HostAddress"
+KEY_PORT_NUMBER = "PortNumber"
+KEY_METADATA = "Metadata"
+
+
+class ConnectivityInfo(object):
+    """
+
+    Class that stores one set of the connectivity information.
+    This is the data model for easy access to the discovery information from the discovery request function call. No
+    need to call directly from user scripts.
+
+    """
+
+    def __init__(self, id, host, port, metadata):
+        self._id = id
+        self._host = host
+        self._port = port
+        self._metadata = metadata
+
+    @property
+    def id(self):
+        """
+
+        Connectivity Information Id.
+
+        """
+        return self._id
+
+    @property
+    def host(self):
+        """
+
+        Host address.
+
+        """
+        return self._host
+
+    @property
+    def port(self):
+        """
+
+        Port number.
+
+        """
+        return self._port
+
+    @property
+    def metadata(self):
+        """
+
+        Metadata string.
+
+        """
+        return self._metadata
+
+
+class CoreConnectivityInfo(object):
+    """
+
+    Class that stores the connectivity information for a Greengrass core.
+    This is the data model for easy access to the discovery information from the discovery request function call. No
+    need to call directly from user scripts.
+ + """ + + def __init__(self, coreThingArn, groupId): + self._core_thing_arn = coreThingArn + self._group_id = groupId + self._connectivity_info_dict = dict() + + @property + def coreThingArn(self): + """ + + Thing arn for this Greengrass core. + + """ + return self._core_thing_arn + + @property + def groupId(self): + """ + + Greengrass group id that this Greengrass core belongs to. + + """ + return self._group_id + + @property + def connectivityInfoList(self): + """ + + The list of connectivity information that this Greengrass core has. + + """ + return list(self._connectivity_info_dict.values()) + + def getConnectivityInfo(self, id): + """ + + **Description** + + Used for quickly accessing a certain set of connectivity information by id. + + **Syntax** + + .. code:: python + + myCoreConnectivityInfo.getConnectivityInfo("CoolId") + + **Parameters** + + *id* - The id for the desired connectivity information. + + **Return** + + :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.ConnectivityInfo` object. + + """ + return self._connectivity_info_dict.get(id) + + def appendConnectivityInfo(self, connectivityInfo): + """ + + **Description** + + Used for adding a new set of connectivity information to the list for this Greengrass core. This is used by the + SDK internally. No need to call directly from user scripts. + + **Syntax** + + .. code:: python + + myCoreConnectivityInfo.appendConnectivityInfo(newInfo) + + **Parameters** + + *connectivityInfo* - :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.ConnectivityInfo` object. + + **Returns** + + None + + """ + self._connectivity_info_dict[connectivityInfo.id] = connectivityInfo + + +class GroupConnectivityInfo(object): + """ + + Class that stores the connectivity information for a specific Greengrass group. + This is the data model for easy access to the discovery information from the discovery request function call. No + need to call directly from user scripts. 
+ + """ + def __init__(self, groupId): + self._group_id = groupId + self._core_connectivity_info_dict = dict() + self._ca_list = list() + + @property + def groupId(self): + """ + + Id for this Greengrass group. + + """ + return self._group_id + + @property + def coreConnectivityInfoList(self): + """ + + A list of Greengrass cores + (:code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` object) that belong to this + Greengrass group. + + """ + return list(self._core_connectivity_info_dict.values()) + + @property + def caList(self): + """ + + A list of CA content strings for this Greengrass group. + + """ + return self._ca_list + + def getCoreConnectivityInfo(self, coreThingArn): + """ + + **Description** + + Used to retrieve the corresponding :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` + object by core thing arn. + + **Syntax** + + .. code:: python + + myGroupConnectivityInfo.getCoreConnectivityInfo("YourOwnArnString") + + **Parameters** + + coreThingArn - Thing arn for the desired Greengrass core. + + **Returns** + + :code:`AWSIoTPythonSDK.core.greengrass.discovery.CoreConnectivityInfo` object. + + """ + return self._core_connectivity_info_dict.get(coreThingArn) + + def appendCoreConnectivityInfo(self, coreConnectivityInfo): + """ + + **Description** + + Used to append new core connectivity information to this group connectivity information. This is used by the + SDK internally. No need to call directly from user scripts. + + **Syntax** + + .. code:: python + + myGroupConnectivityInfo.appendCoreConnectivityInfo(newCoreConnectivityInfo) + + **Parameters** + + *coreConnectivityInfo* - :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` object. 
+
+        **Returns**
+
+        None
+
+        """
+        self._core_connectivity_info_dict[coreConnectivityInfo.coreThingArn] = coreConnectivityInfo
+
+    def appendCa(self, ca):
+        """
+
+        **Description**
+
+        Used to append a new CA content string to this group connectivity information. This is used by the SDK internally.
+        No need to call directly from user scripts.
+
+        **Syntax**
+
+        .. code:: python
+
+          myGroupConnectivityInfo.appendCa("CaContentString")
+
+        **Parameters**
+
+        *ca* - Group CA content string.
+
+        **Returns**
+
+        None
+
+        """
+        self._ca_list.append(ca)
+
+
+class DiscoveryInfo(object):
+    """
+
+    Class that stores the discovery information coming back from the discovery request.
+    This is the data model for easy access to the discovery information from the discovery request function call. No
+    need to call directly from user scripts.
+
+    """
+    def __init__(self, rawJson):
+        self._raw_json = rawJson
+
+    @property
+    def rawJson(self):
+        """
+
+        JSON response string that contains the discovery information. This is kept in case users want to do
+        some processing themselves.
+
+        """
+        return self._raw_json
+
+    def getAllCores(self):
+        """
+
+        **Description**
+
+        Used to retrieve the list of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo`
+        objects for this discovery information. The retrieved cores could be from different Greengrass groups. This is
+        designed for users who want to iterate through all available cores at the same time, regardless of which group
+        those cores are in.
+
+        **Syntax**
+
+        .. code:: python
+
+          myDiscoveryInfo.getAllCores()
+
+        **Parameters**
+
+        None
+
+        **Returns**
+
+        List of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.CoreConnectivityInfo` objects.
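The data model above is populated from the discovery response JSON. The hand-built sample document below shows the schema these accessors walk; the key names mirror the `KEY_*` constants defined in this module (`GGGroups`, `Cores`, `Connectivity`, `CAs`), while the concrete ARN, host, and CA values are made up. The flattening loop mirrors how the SDK collects `(groupId, caContent)` pairs.

```python
import json

# Hand-built sample of a discovery response document. Values are invented;
# the key names mirror the GGGroups schema used by this module.
raw_json = json.dumps({
    "GGGroups": [{
        "GGGroupId": "group-1",
        "Cores": [{
            "thingArn": "arn:aws:iot:us-east-1:123456789012:thing/myCore",
            "Connectivity": [{"Id": "conn-1", "HostAddress": "192.168.1.2",
                              "PortNumber": 8883, "Metadata": ""}]
        }],
        "CAs": ["-----BEGIN CERTIFICATE-----..."]
    }]
})

# Flatten (groupId, caContent) pairs, the way the accessors above do internally.
ca_pairs = []
for group in json.loads(raw_json)["GGGroups"]:
    for ca in group["CAs"]:
        ca_pairs.append((group["GGGroupId"], ca))
print(ca_pairs)
```

Walking the raw document like this is exactly what the decode helpers later in this class automate, turning each group into a `GroupConnectivityInfo` with its cores and CAs attached.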
+ + """ + groups_list = self.getAllGroups() + core_list = list() + + for group in groups_list: + core_list.extend(group.coreConnectivityInfoList) + + return core_list + + def getAllCas(self): + """ + + **Description** + + Used to retrieve the list of :code:`(groupId, caContent)` pair for this discovery information. The retrieved + pairs could be from different Greengrass groups. This is designed for users who want to iterate through all + available cores/groups/CAs at the same time, regardless of which group those CAs belong to. + + **Syntax** + + .. code:: python + + myDiscoveryInfo.getAllCas() + + **Parameters** + + None + + **Returns** + + List of :code:`(groupId, caContent)` string pair, where :code:`caContent` is the CA content string and + :code:`groupId` is the group id that this CA belongs to. + + """ + group_list = self.getAllGroups() + ca_list = list() + + for group in group_list: + for ca in group.caList: + ca_list.append((group.groupId, ca)) + + return ca_list + + def getAllGroups(self): + """ + + **Description** + + Used to retrieve the list of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.GroupConnectivityInfo` + object for this discovery information. This is designed for users who want to iterate through all available + groups that this Greengrass aware device (GGAD) belongs to. + + **Syntax** + + .. code:: python + + myDiscoveryInfo.getAllGroups() + + **Parameters** + + None + + **Returns** + + List of :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.GroupConnectivityInfo` object. + + """ + groups_dict = self.toObjectAtGroupLevel() + return list(groups_dict.values()) + + def toObjectAtGroupLevel(self): + """ + + **Description** + + Used to get a dictionary of Greengrass group discovery information, with group id string as key and the + corresponding :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.GroupConnectivityInfo` object as the + value. 
This is designed for users who know exactly which group, which core and which set of connectivity info
+        they want to use for the Greengrass aware device to connect.
+
+        **Syntax**
+
+        .. code:: python
+
+          # Get to the targeted connectivity information for a specific core in a specific group
+          groupLevelDiscoveryInfoObj = myDiscoveryInfo.toObjectAtGroupLevel()
+          groupConnectivityInfoObj = groupLevelDiscoveryInfoObj["IKnowMyGroupId"]
+          coreConnectivityInfoObj = groupConnectivityInfoObj.getCoreConnectivityInfo("IKnowMyCoreThingArn")
+          connectivityInfo = coreConnectivityInfoObj.getConnectivityInfo("IKnowMyConnectivityInfoSetId")
+          # Now retrieve the detailed information
+          caList = groupConnectivityInfoObj.caList
+          host = connectivityInfo.host
+          port = connectivityInfo.port
+          metadata = connectivityInfo.metadata
+          # Actual connecting logic follows...
+
+        """
+        groups_object = json.loads(self._raw_json)
+        groups_dict = dict()
+
+        for group_object in groups_object[KEY_GROUP_LIST]:
+            group_info = self._decode_group_info(group_object)
+            groups_dict[group_info.groupId] = group_info
+
+        return groups_dict
+
+    def _decode_group_info(self, group_object):
+        group_id = group_object[KEY_GROUP_ID]
+        group_info = GroupConnectivityInfo(group_id)
+
+        for core in group_object[KEY_CORE_LIST]:
+            core_info = self._decode_core_info(core, group_id)
+            group_info.appendCoreConnectivityInfo(core_info)
+
+        for ca in group_object[KEY_CA_LIST]:
+            group_info.appendCa(ca)
+
+        return group_info
+
+    def _decode_core_info(self, core_object, group_id):
+        core_info = CoreConnectivityInfo(core_object[KEY_CORE_ARN], group_id)
+
+        for connectivity_info_object in core_object[KEY_CONNECTIVITY_INFO_LIST]:
+            connectivity_info = ConnectivityInfo(connectivity_info_object[KEY_CONNECTIVITY_INFO_ID],
+                                                 connectivity_info_object[KEY_HOST_ADDRESS],
+                                                 connectivity_info_object[KEY_PORT_NUMBER],
+                                                 connectivity_info_object.get(KEY_METADATA, ''))
+            core_info.appendConnectivityInfo(connectivity_info)
+
+ return core_info diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/greengrass/discovery/providers.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/greengrass/discovery/providers.py new file mode 100644 index 0000000..646d79d --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/greengrass/discovery/providers.py @@ -0,0 +1,426 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
+# */ + + +from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryInvalidRequestException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryUnauthorizedException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryDataNotFoundException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryThrottlingException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryTimeoutException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryFailure +from AWSIoTPythonSDK.core.greengrass.discovery.models import DiscoveryInfo +from AWSIoTPythonSDK.core.protocol.connection.alpn import SSLContextBuilder +import re +import sys +import ssl +import time +import errno +import logging +import socket +import platform +if platform.system() == 'Windows': + EAGAIN = errno.WSAEWOULDBLOCK +else: + EAGAIN = errno.EAGAIN + + +class DiscoveryInfoProvider(object): + + REQUEST_TYPE_PREFIX = "GET " + PAYLOAD_PREFIX = "/greengrass/discover/thing/" + PAYLOAD_SUFFIX = " HTTP/1.1\r\n" # Space in the front + HOST_PREFIX = "Host: " + HOST_SUFFIX = "\r\n\r\n" + HTTP_PROTOCOL = r"HTTP/1.1 " + CONTENT_LENGTH = r"content-length: " + CONTENT_LENGTH_PATTERN = CONTENT_LENGTH + r"([0-9]+)\r\n" + HTTP_RESPONSE_CODE_PATTERN = HTTP_PROTOCOL + r"([0-9]+) " + + HTTP_SC_200 = "200" + HTTP_SC_400 = "400" + HTTP_SC_401 = "401" + HTTP_SC_404 = "404" + HTTP_SC_429 = "429" + + LOW_LEVEL_RC_COMPLETE = 0 + LOW_LEVEL_RC_TIMEOUT = -1 + + _logger = logging.getLogger(__name__) + + def __init__(self, caPath="", certPath="", keyPath="", host="", port=8443, timeoutSec=120): + """ + + The class that provides functionality to perform a Greengrass discovery process to the cloud. + + Users can perform Greengrass discovery process for a specific Greengrass aware device to retrieve + connectivity/identity information of Greengrass cores within the same group. + + **Syntax** + + .. 
code:: python + + from AWSIoTPythonSDK.core.greengrass.discovery.providers import DiscoveryInfoProvider + + # Create a discovery information provider + myDiscoveryInfoProvider = DiscoveryInfoProvider() + # Create a discovery information provider with custom configuration + myDiscoveryInfoProvider = DiscoveryInfoProvider(caPath=myCAPath, certPath=myCertPath, keyPath=myKeyPath, host=myHost, timeoutSec=myTimeoutSec) + + **Parameters** + + *caPath* - Path to read the root CA file. + + *certPath* - Path to read the certificate file. + + *keyPath* - Path to read the private key file. + + *host* - String that denotes the host name of the user-specific AWS IoT endpoint. + + *port* - Integer that denotes the port number to connect to. For discovery purpose, it is 8443 by default. + + *timeoutSec* - Time out configuration in seconds to consider a discovery request sending/response waiting has + been timed out. + + **Returns** + + AWSIoTPythonSDK.core.greengrass.discovery.providers.DiscoveryInfoProvider object + + """ + self._ca_path = caPath + self._cert_path = certPath + self._key_path = keyPath + self._host = host + self._port = port + self._timeout_sec = timeoutSec + self._expected_exception_map = { + self.HTTP_SC_400 : DiscoveryInvalidRequestException(), + self.HTTP_SC_401 : DiscoveryUnauthorizedException(), + self.HTTP_SC_404 : DiscoveryDataNotFoundException(), + self.HTTP_SC_429 : DiscoveryThrottlingException() + } + + def configureEndpoint(self, host, port=8443): + """ + + **Description** + + Used to configure the host address and port number for the discovery request to hit. Should be called before + the discovery request happens. + + **Syntax** + + .. 
code:: python + + # Using default port configuration, 8443 + myDiscoveryInfoProvider.configureEndpoint(host="prefix.iot.us-east-1.amazonaws.com") + # Customize port configuration + myDiscoveryInfoProvider.configureEndpoint(host="prefix.iot.us-east-1.amazonaws.com", port=8888) + + **Parameters** + + *host* - String that denotes the host name of the user-specific AWS IoT endpoint. + + *port* - Integer that denotes the port number to connect to. For discovery purpose, it is 8443 by default. + + **Returns** + + None + + """ + self._host = host + self._port = port + + def configureCredentials(self, caPath, certPath, keyPath): + """ + + **Description** + + Used to configure the credentials for discovery request. Should be called before the discovery request happens. + + **Syntax** + + .. code:: python + + myDiscoveryInfoProvider.configureCredentials("my/ca/path", "my/cert/path", "my/key/path") + + **Parameters** + + *caPath* - Path to read the root CA file. + + *certPath* - Path to read the certificate file. + + *keyPath* - Path to read the private key file. + + **Returns** + + None + + """ + self._ca_path = caPath + self._cert_path = certPath + self._key_path = keyPath + + def configureTimeout(self, timeoutSec): + """ + + **Description** + + Used to configure the time out in seconds for discovery request sending/response waiting. Should be called before + the discovery request happens. + + **Syntax** + + .. code:: python + + # Configure the time out for discovery to be 10 seconds + myDiscoveryInfoProvider.configureTimeout(10) + + **Parameters** + + *timeoutSec* - Time out configuration in seconds to consider a discovery request sending/response waiting has + been timed out. + + **Returns** + + None + + """ + self._timeout_sec = timeoutSec + + def discover(self, thingName): + """ + + **Description** + + Perform the discovery request for the given Greengrass aware device thing name. + + **Syntax** + + .. 
code:: python + + myDiscoveryInfoProvider.discover(thingName="myGGAD") + + **Parameters** + + *thingName* - Greengrass aware device thing name. + + **Returns** + + :code:`AWSIoTPythonSDK.core.greengrass.discovery.models.DiscoveryInfo` object. + + """ + self._logger.info("Starting discover request...") + self._logger.info("Endpoint: " + self._host + ":" + str(self._port)) + self._logger.info("Target thing: " + thingName) + sock = self._create_tcp_connection() + ssl_sock = self._create_ssl_connection(sock) + self._raise_on_timeout(self._send_discovery_request(ssl_sock, thingName)) + status_code, response_body = self._receive_discovery_response(ssl_sock) + + return self._raise_if_not_200(status_code, response_body) + + def _create_tcp_connection(self): + self._logger.debug("Creating tcp connection...") + try: + if (sys.version_info[0] == 2 and sys.version_info[1] < 7) or (sys.version_info[0] == 3 and sys.version_info[1] < 2): + sock = socket.create_connection((self._host, self._port)) + else: + sock = socket.create_connection((self._host, self._port), source_address=("", 0)) + return sock + except socket.error as err: + if err.errno != errno.EINPROGRESS and err.errno != errno.EWOULDBLOCK and err.errno != EAGAIN: + raise + self._logger.debug("Created tcp connection.") + + def _create_ssl_connection(self, sock): + self._logger.debug("Creating ssl connection...") + + ssl_protocol_version = ssl.PROTOCOL_SSLv23 + + if self._port == 443: + ssl_context = SSLContextBuilder()\ + .with_ca_certs(self._ca_path)\ + .with_cert_key_pair(self._cert_path, self._key_path)\ + .with_cert_reqs(ssl.CERT_REQUIRED)\ + .with_check_hostname(True)\ + .with_ciphers(None)\ + .with_alpn_protocols(['x-amzn-http-ca'])\ + .build() + ssl_sock = ssl_context.wrap_socket(sock, server_hostname=self._host, do_handshake_on_connect=False) + ssl_sock.do_handshake() + else: + ssl_sock = ssl.wrap_socket(sock, + certfile=self._cert_path, + keyfile=self._key_path, + ca_certs=self._ca_path, + 
cert_reqs=ssl.CERT_REQUIRED, + ssl_version=ssl_protocol_version) + + self._logger.debug("Matching host name...") + if sys.version_info[0] < 3 or (sys.version_info[0] == 3 and sys.version_info[1] < 2): + self._tls_match_hostname(ssl_sock) + else: + ssl.match_hostname(ssl_sock.getpeercert(), self._host) + + return ssl_sock + + def _tls_match_hostname(self, ssl_sock): + try: + cert = ssl_sock.getpeercert() + except AttributeError: + # the getpeercert can throw Attribute error: object has no attribute 'peer_certificate' + # Don't let that crash the whole client. See also: http://bugs.python.org/issue13721 + raise ssl.SSLError('Not connected') + + san = cert.get('subjectAltName') + if san: + have_san_dns = False + for (key, value) in san: + if key == 'DNS': + have_san_dns = True + if self._host_matches_cert(self._host.lower(), value.lower()) == True: + return + if key == 'IP Address': + have_san_dns = True + if value.lower() == self._host.lower(): + return + + if have_san_dns: + # Only check subject if subjectAltName dns not found. 
+ raise ssl.SSLError('Certificate subject does not match remote hostname.') + subject = cert.get('subject') + if subject: + for ((key, value),) in subject: + if key == 'commonName': + if self._host_matches_cert(self._host.lower(), value.lower()): + return + + raise ssl.SSLError('Certificate subject does not match remote hostname.') + + def _host_matches_cert(self, host, cert_host): + if cert_host[0:2] == "*.": + if cert_host.count("*") != 1: + return False + + host_match = host.split(".", 1)[1] + cert_match = cert_host.split(".", 1)[1] + return host_match == cert_match + else: + return host == cert_host + + def _send_discovery_request(self, ssl_sock, thing_name): + request = self.REQUEST_TYPE_PREFIX + \ + self.PAYLOAD_PREFIX + \ + thing_name + \ + self.PAYLOAD_SUFFIX + \ + self.HOST_PREFIX + \ + self._host + ":" + str(self._port) + \ + self.HOST_SUFFIX + self._logger.debug("Sending discover request: " + request) + + start_time = time.time() + desired_length_to_write = len(request) + actual_length_written = 0 + while True: + try: + length_written = ssl_sock.write(request.encode("utf-8")) + actual_length_written += length_written + except socket.error as err: + if err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE: + pass # Retriable; try again on the next loop iteration + else: + raise + if actual_length_written == desired_length_to_write: + return self.LOW_LEVEL_RC_COMPLETE + if start_time + self._timeout_sec < time.time(): + return self.LOW_LEVEL_RC_TIMEOUT + + def _receive_discovery_response(self, ssl_sock): + self._logger.debug("Receiving discover response header...") + rc1, response_header = self._receive_until(ssl_sock, self._got_two_crlfs) + status_code, body_length = self._handle_discovery_response_header(rc1, response_header.decode("utf-8")) + + self._logger.debug("Receiving discover response body...") + rc2, response_body = self._receive_until(ssl_sock, self._got_enough_bytes, body_length) + response_body =
self._handle_discovery_response_body(rc2, response_body.decode("utf-8")) + + return status_code, response_body + + def _receive_until(self, ssl_sock, criteria_function, extra_data=None): + start_time = time.time() + response = bytearray() + number_bytes_read = 0 + while True: # Python does not have do-while + try: + response.append(self._convert_to_int_py3(ssl_sock.read(1))) + number_bytes_read += 1 + except socket.error as err: + if err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE: + pass + + if criteria_function((number_bytes_read, response, extra_data)): + return self.LOW_LEVEL_RC_COMPLETE, response + if start_time + self._timeout_sec < time.time(): + return self.LOW_LEVEL_RC_TIMEOUT, response + + def _convert_to_int_py3(self, input_char): + try: + return ord(input_char) + except: + return input_char + + def _got_enough_bytes(self, data): + number_bytes_read, response, target_length = data + return number_bytes_read == int(target_length) + + def _got_two_crlfs(self, data): + number_bytes_read, response, extra_data_unused = data + number_of_crlf = 2 + has_enough_bytes = number_bytes_read > number_of_crlf * 2 - 1 + if has_enough_bytes: + end_of_received = response[number_bytes_read - number_of_crlf * 2 : number_bytes_read] + expected_end_of_response = b"\r\n" * number_of_crlf + return end_of_received == expected_end_of_response + else: + return False + + def _handle_discovery_response_header(self, rc, response): + self._raise_on_timeout(rc) + http_status_code_matcher = re.compile(self.HTTP_RESPONSE_CODE_PATTERN) + http_status_code_matched_groups = http_status_code_matcher.match(response) + content_length_matcher = re.compile(self.CONTENT_LENGTH_PATTERN) + content_length_matched_groups = content_length_matcher.search(response) + return http_status_code_matched_groups.group(1), content_length_matched_groups.group(1) + + def _handle_discovery_response_body(self, rc, response): + self._raise_on_timeout(rc) + return response + + def 
_raise_on_timeout(self, rc): + if rc == self.LOW_LEVEL_RC_TIMEOUT: + raise DiscoveryTimeoutException() + + def _raise_if_not_200(self, status_code, response_body): # response_body here is str in Py3 + if status_code != self.HTTP_SC_200: + expected_exception = self._expected_exception_map.get(status_code) + if expected_exception: + raise expected_exception + else: + raise DiscoveryFailure(response_body) + return DiscoveryInfo(response_body) diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/jobs/__init__.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/jobs/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/jobs/thingJobManager.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/jobs/thingJobManager.py new file mode 100644 index 0000000..d2396b2 --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/jobs/thingJobManager.py @@ -0,0 +1,156 @@ +# /* +# * Copyright 2010-2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
+# */ + +import json + +_BASE_THINGS_TOPIC = "$aws/things/" +_NOTIFY_OPERATION = "notify" +_NOTIFY_NEXT_OPERATION = "notify-next" +_GET_OPERATION = "get" +_START_NEXT_OPERATION = "start-next" +_WILDCARD_OPERATION = "+" +_UPDATE_OPERATION = "update" +_ACCEPTED_REPLY = "accepted" +_REJECTED_REPLY = "rejected" +_WILDCARD_REPLY = "#" + +#Members of this enum are tuples +_JOB_ID_REQUIRED_INDEX = 1 +_JOB_OPERATION_INDEX = 2 + +_STATUS_KEY = 'status' +_STATUS_DETAILS_KEY = 'statusDetails' +_EXPECTED_VERSION_KEY = 'expectedVersion' +_EXEXCUTION_NUMBER_KEY = 'executionNumber' +_INCLUDE_JOB_EXECUTION_STATE_KEY = 'includeJobExecutionState' +_INCLUDE_JOB_DOCUMENT_KEY = 'includeJobDocument' +_CLIENT_TOKEN_KEY = 'clientToken' +_STEP_TIMEOUT_IN_MINUTES_KEY = 'stepTimeoutInMinutes' + +#The type of job topic. +class jobExecutionTopicType(object): + JOB_UNRECOGNIZED_TOPIC = (0, False, '') + JOB_GET_PENDING_TOPIC = (1, False, _GET_OPERATION) + JOB_START_NEXT_TOPIC = (2, False, _START_NEXT_OPERATION) + JOB_DESCRIBE_TOPIC = (3, True, _GET_OPERATION) + JOB_UPDATE_TOPIC = (4, True, _UPDATE_OPERATION) + JOB_NOTIFY_TOPIC = (5, False, _NOTIFY_OPERATION) + JOB_NOTIFY_NEXT_TOPIC = (6, False, _NOTIFY_NEXT_OPERATION) + JOB_WILDCARD_TOPIC = (7, False, _WILDCARD_OPERATION) + +#Members of this enum are tuples +_JOB_SUFFIX_INDEX = 1 +#The type of reply topic, or #JOB_REQUEST_TYPE for topics that are not replies. 
+class jobExecutionTopicReplyType(object): + JOB_UNRECOGNIZED_TOPIC_TYPE = (0, '') + JOB_REQUEST_TYPE = (1, '') + JOB_ACCEPTED_REPLY_TYPE = (2, '/' + _ACCEPTED_REPLY) + JOB_REJECTED_REPLY_TYPE = (3, '/' + _REJECTED_REPLY) + JOB_WILDCARD_REPLY_TYPE = (4, '/' + _WILDCARD_REPLY) + + _JOB_STATUS_INDEX = 1 +class jobExecutionStatus(object): + JOB_EXECUTION_STATUS_NOT_SET = (0, None) + JOB_EXECUTION_QUEUED = (1, 'QUEUED') + JOB_EXECUTION_IN_PROGRESS = (2, 'IN_PROGRESS') + JOB_EXECUTION_FAILED = (3, 'FAILED') + JOB_EXECUTION_SUCCEEDED = (4, 'SUCCEEDED') + JOB_EXECUTION_CANCELED = (5, 'CANCELED') + JOB_EXECUTION_REJECTED = (6, 'REJECTED') + JOB_EXECUTION_UNKNOWN_STATUS = (99, None) + +def _getExecutionStatus(jobStatus): + try: + return jobStatus[_JOB_STATUS_INDEX] + except (IndexError, TypeError): + # jobStatus may be malformed or None + return None + +def _isWithoutJobIdTopicType(srcJobExecTopicType): + return (srcJobExecTopicType == jobExecutionTopicType.JOB_GET_PENDING_TOPIC or srcJobExecTopicType == jobExecutionTopicType.JOB_START_NEXT_TOPIC + or srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_TOPIC or srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC) + +class thingJobManager: + def __init__(self, thingName, clientToken=None): + self._thingName = thingName + self._clientToken = clientToken + + def getJobTopic(self, srcJobExecTopicType, srcJobExecTopicReplyType=jobExecutionTopicReplyType.JOB_REQUEST_TYPE, jobId=None): + if self._thingName is None: + return None + + #Verify that topics supporting only the request type actually have the request type specified for the reply + if (srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_TOPIC or srcJobExecTopicType == jobExecutionTopicType.JOB_NOTIFY_NEXT_TOPIC) and srcJobExecTopicReplyType != jobExecutionTopicReplyType.JOB_REQUEST_TYPE: + return None + + #Verify that topics that explicitly do not want a job ID do not have one specified + if (jobId is not None and _isWithoutJobIdTopicType(srcJobExecTopicType)): + return None + + #Verify job ID is present if the topic
requires one + if jobId is None and srcJobExecTopicType[_JOB_ID_REQUIRED_INDEX]: + return None + + #Ensure the job operation is a non-empty string + if srcJobExecTopicType[_JOB_OPERATION_INDEX] == '': + return None + + if srcJobExecTopicType[_JOB_ID_REQUIRED_INDEX]: + return '{0}{1}/jobs/{2}/{3}{4}'.format(_BASE_THINGS_TOPIC, self._thingName, str(jobId), srcJobExecTopicType[_JOB_OPERATION_INDEX], srcJobExecTopicReplyType[_JOB_SUFFIX_INDEX]) + elif srcJobExecTopicType == jobExecutionTopicType.JOB_WILDCARD_TOPIC: + return '{0}{1}/jobs/#'.format(_BASE_THINGS_TOPIC, self._thingName) + else: + return '{0}{1}/jobs/{2}{3}'.format(_BASE_THINGS_TOPIC, self._thingName, srcJobExecTopicType[_JOB_OPERATION_INDEX], srcJobExecTopicReplyType[_JOB_SUFFIX_INDEX]) + + def serializeJobExecutionUpdatePayload(self, status, statusDetails=None, expectedVersion=0, executionNumber=0, includeJobExecutionState=False, includeJobDocument=False, stepTimeoutInMinutes=None): + executionStatus = _getExecutionStatus(status) + if executionStatus is None: + return None + payload = {_STATUS_KEY: executionStatus} + if statusDetails: + payload[_STATUS_DETAILS_KEY] = statusDetails + if expectedVersion > 0: + payload[_EXPECTED_VERSION_KEY] = str(expectedVersion) + if executionNumber > 0: + payload[_EXEXCUTION_NUMBER_KEY] = str(executionNumber) + if includeJobExecutionState: + payload[_INCLUDE_JOB_EXECUTION_STATE_KEY] = True + if includeJobDocument: + payload[_INCLUDE_JOB_DOCUMENT_KEY] = True + if self._clientToken is not None: + payload[_CLIENT_TOKEN_KEY] = self._clientToken + if stepTimeoutInMinutes is not None: + payload[_STEP_TIMEOUT_IN_MINUTES_KEY] = stepTimeoutInMinutes + return json.dumps(payload) + + def serializeDescribeJobExecutionPayload(self, executionNumber=0, includeJobDocument=True): + payload = {_INCLUDE_JOB_DOCUMENT_KEY: includeJobDocument} + if executionNumber > 0: + payload[_EXEXCUTION_NUMBER_KEY] = executionNumber + if self._clientToken is not None: + payload[_CLIENT_TOKEN_KEY] = 
self._clientToken + return json.dumps(payload) + + def serializeStartNextPendingJobExecutionPayload(self, statusDetails=None, stepTimeoutInMinutes=None): + payload = {} + if self._clientToken is not None: + payload[_CLIENT_TOKEN_KEY] = self._clientToken + if statusDetails is not None: + payload[_STATUS_DETAILS_KEY] = statusDetails + if stepTimeoutInMinutes is not None: + payload[_STEP_TIMEOUT_IN_MINUTES_KEY] = stepTimeoutInMinutes + return json.dumps(payload) + + def serializeClientTokenPayload(self): + return json.dumps({_CLIENT_TOKEN_KEY: self._clientToken}) if self._clientToken is not None else '{}' diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/__init__.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/connection/__init__.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/connection/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/connection/alpn.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/connection/alpn.py new file mode 100644 index 0000000..8da98dd --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/connection/alpn.py @@ -0,0 +1,63 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. 
This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + + +try: + import ssl +except ImportError: + ssl = None + + +class SSLContextBuilder(object): + + def __init__(self): + self.check_supportability() + self._ssl_context = ssl.create_default_context() + + def check_supportability(self): + if ssl is None: + raise RuntimeError("This platform has no SSL/TLS.") + if not hasattr(ssl, "SSLContext"): + raise NotImplementedError("This platform does not support SSLContext. Python 2.7.10+/3.5+ is required.") + if not hasattr(ssl.SSLContext, "set_alpn_protocols"): + raise NotImplementedError("This platform does not support ALPN as a TLS extension. Python 2.7.10+/3.5+ is required.") + + def with_ca_certs(self, ca_certs): + self._ssl_context.load_verify_locations(ca_certs) + return self + + def with_cert_key_pair(self, cert_file, key_file): + self._ssl_context.load_cert_chain(cert_file, key_file) + return self + + def with_cert_reqs(self, cert_reqs): + self._ssl_context.verify_mode = cert_reqs + return self + + def with_check_hostname(self, check_hostname): + self._ssl_context.check_hostname = check_hostname + return self + + def with_ciphers(self, ciphers): + if ciphers is not None: + self._ssl_context.set_ciphers(ciphers) # set_ciphers() does not allow None input.
Use default (do nothing) if None + return self + + def with_alpn_protocols(self, alpn_protocols): + self._ssl_context.set_alpn_protocols(alpn_protocols) + return self + + def build(self): + return self._ssl_context diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/connection/cores.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/connection/cores.py new file mode 100644 index 0000000..df12470 --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/connection/cores.py @@ -0,0 +1,699 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +# This class implements the progressive backoff logic for auto-reconnect. +# It manages the reconnect wait time for the current reconnect, controlling +# when to increase it and when to reset it.
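The header comment above describes the progressive backoff scheme that `ProgressiveBackOffCore` implements further down: each failed reconnect doubles the wait time up to a cap, and a connection that stays up long enough resets the wait to the base value. The following is a minimal standalone sketch of that doubling-and-reset behavior (the class name and methods here are illustrative only, not part of the SDK; the SDK's own class additionally sleeps and uses a timer to detect a stable connection):

```python
# Minimal sketch of progressive backoff: double the wait on each failed
# reconnect, cap it at a maximum, and reset to the base value once a
# connection has proven stable. Illustrative only; not part of the SDK.

class ProgressiveBackoffSketch(object):
    def __init__(self, base_sec=1, max_sec=32):
        self._base = base_sec
        self._max = max_sec
        self._current = base_sec

    def next_wait(self):
        # Wait time for this reconnect attempt; double (capped) for the next one
        wait = self._current
        self._current = min(self._max, self._current * 2)
        return wait

    def connection_stable(self):
        # Called after the connection stays up for the minimum stable time
        self._current = self._base


backoff = ProgressiveBackoffSketch()
waits = [backoff.next_wait() for _ in range(7)]  # 1, 2, 4, 8, 16, 32, 32
backoff.connection_stable()                      # next wait drops back to 1
```

The defaults (base 1 s, cap 32 s) mirror the constructor defaults of `ProgressiveBackOffCore` below.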
+ + +import re +import sys +import ssl +import errno +import struct +import socket +import base64 +import time +import threading +import logging +import os +from datetime import datetime +import hashlib +import hmac +from AWSIoTPythonSDK.exception.AWSIoTExceptions import ClientError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import wssNoKeyInEnvironmentError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import wssHandShakeError +from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC +try: + from urllib.parse import quote # Python 3+ +except ImportError: + from urllib import quote +# INI config file handling +try: + from configparser import ConfigParser # Python 3+ + from configparser import NoOptionError + from configparser import NoSectionError +except ImportError: + from ConfigParser import ConfigParser + from ConfigParser import NoOptionError + from ConfigParser import NoSectionError + + +class ProgressiveBackOffCore: + # Logger + _logger = logging.getLogger(__name__) + + def __init__(self, srcBaseReconnectTimeSecond=1, srcMaximumReconnectTimeSecond=32, srcMinimumConnectTimeSecond=20): + # The base reconnection time in seconds, default 1 + self._baseReconnectTimeSecond = srcBaseReconnectTimeSecond + # The maximum reconnection time in seconds, default 32 + self._maximumReconnectTimeSecond = srcMaximumReconnectTimeSecond + # The minimum time in milliseconds that a connection must be maintained in order to be considered stable + # Default 20 + self._minimumConnectTimeSecond = srcMinimumConnectTimeSecond + # Current backOff time in seconds, init to equal to 0 + self._currentBackoffTimeSecond = 1 + # Handler for timer + self._resetBackoffTimer = None + + # For custom progressiveBackoff timing configuration + def configTime(self, srcBaseReconnectTimeSecond, srcMaximumReconnectTimeSecond, srcMinimumConnectTimeSecond): + if srcBaseReconnectTimeSecond < 0 or srcMaximumReconnectTimeSecond < 0 or 
srcMinimumConnectTimeSecond < 0: + self._logger.error("init: Negative time configuration detected.") + raise ValueError("Negative time configuration detected.") + if srcBaseReconnectTimeSecond >= srcMinimumConnectTimeSecond: + self._logger.error("init: Min connect time should be bigger than base reconnect time.") + raise ValueError("Min connect time should be bigger than base reconnect time.") + self._baseReconnectTimeSecond = srcBaseReconnectTimeSecond + self._maximumReconnectTimeSecond = srcMaximumReconnectTimeSecond + self._minimumConnectTimeSecond = srcMinimumConnectTimeSecond + self._currentBackoffTimeSecond = 1 + + # Block the reconnect logic for _currentBackoffTimeSecond + # Update the currentBackoffTimeSecond for the next reconnect + # Cancel the in-waiting timer for resetting backOff time + # This should get called only when a disconnect/reconnect happens + def backOff(self): + self._logger.debug("backOff: current backoff time is: " + str(self._currentBackoffTimeSecond) + " sec.") + if self._resetBackoffTimer is not None: + # Cancel the timer + self._resetBackoffTimer.cancel() + # Block the reconnect logic + time.sleep(self._currentBackoffTimeSecond) + # Update the backoff time + if self._currentBackoffTimeSecond == 0: + # This is the first attempt to connect, set it to base + self._currentBackoffTimeSecond = self._baseReconnectTimeSecond + else: + # r_cur = min(2^n*r_base, r_max) + self._currentBackoffTimeSecond = min(self._maximumReconnectTimeSecond, self._currentBackoffTimeSecond * 2) + + # Start the timer for resetting _currentBackoffTimeSecond + # Will be cancelled upon calling backOff + def startStableConnectionTimer(self): + self._resetBackoffTimer = threading.Timer(self._minimumConnectTimeSecond, + self._connectionStableThenResetBackoffTime) + self._resetBackoffTimer.start() + + def stopStableConnectionTimer(self): + if self._resetBackoffTimer is not None: + # Cancel the timer + self._resetBackoffTimer.cancel() + + # Timer callback to reset 
_currentBackoffTimeSecond + # If the connection is stable for longer than _minimumConnectTimeSecond, + # reset the currentBackoffTimeSecond to _baseReconnectTimeSecond + def _connectionStableThenResetBackoffTime(self): + self._logger.debug( + "stableConnection: Resetting the backoff time to: " + str(self._baseReconnectTimeSecond) + " sec.") + self._currentBackoffTimeSecond = self._baseReconnectTimeSecond + + +class SigV4Core: + + _logger = logging.getLogger(__name__) + + def __init__(self): + self._aws_access_key_id = "" + self._aws_secret_access_key = "" + self._aws_session_token = "" + self._credentialConfigFilePath = "~/.aws/credentials" + + def setIAMCredentials(self, srcAWSAccessKeyID, srcAWSSecretAccessKey, srcAWSSessionToken): + self._aws_access_key_id = srcAWSAccessKeyID + self._aws_secret_access_key = srcAWSSecretAccessKey + self._aws_session_token = srcAWSSessionToken + + def _createAmazonDate(self): + # Returned as a unicode string in Py3.x + amazonDate = [] + currentTime = datetime.utcnow() + YMDHMS = currentTime.strftime('%Y%m%dT%H%M%SZ') + YMD = YMDHMS[0:YMDHMS.index('T')] + amazonDate.append(YMD) + amazonDate.append(YMDHMS) + return amazonDate + + def _sign(self, key, message): + # Returned as a utf-8 byte string in Py3.x + return hmac.new(key, message.encode('utf-8'), hashlib.sha256).digest() + + def _getSignatureKey(self, key, dateStamp, regionName, serviceName): + # Returned as a utf-8 byte string in Py3.x + kDate = self._sign(('AWS4' + key).encode('utf-8'), dateStamp) + kRegion = self._sign(kDate, regionName) + kService = self._sign(kRegion, serviceName) + kSigning = self._sign(kService, 'aws4_request') + return kSigning + + def _checkIAMCredentials(self): + # Check custom config + ret = self._checkKeyInCustomConfig() + # Check environment variables + if not ret: + ret = self._checkKeyInEnv() + # Check files + if not ret: + ret = self._checkKeyInFiles() + # All credentials returned as unicode strings in Py3.x + return ret + + def 
_checkKeyInEnv(self): + ret = dict() + self._aws_access_key_id = os.environ.get('AWS_ACCESS_KEY_ID') + self._aws_secret_access_key = os.environ.get('AWS_SECRET_ACCESS_KEY') + self._aws_session_token = os.environ.get('AWS_SESSION_TOKEN') + if self._aws_access_key_id is not None and self._aws_secret_access_key is not None: + ret["aws_access_key_id"] = self._aws_access_key_id + ret["aws_secret_access_key"] = self._aws_secret_access_key + # We do not necessarily need session token... + if self._aws_session_token is not None: + ret["aws_session_token"] = self._aws_session_token + self._logger.debug("IAM credentials from env var.") + return ret + + def _checkKeyInINIDefault(self, srcConfigParser, sectionName): + ret = dict() + # Check aws_access_key_id and aws_secret_access_key + try: + ret["aws_access_key_id"] = srcConfigParser.get(sectionName, "aws_access_key_id") + ret["aws_secret_access_key"] = srcConfigParser.get(sectionName, "aws_secret_access_key") + except NoOptionError: + self._logger.warn("Cannot find IAM keyID/secretKey in credential file.") + # We do not continue searching if we cannot even get IAM id/secret right + if len(ret) == 2: + # Check aws_session_token, optional + try: + ret["aws_session_token"] = srcConfigParser.get(sectionName, "aws_session_token") + except NoOptionError: + self._logger.debug("No AWS Session Token found.") + return ret + + def _checkKeyInFiles(self): + credentialFile = None + credentialConfig = None + ret = dict() + # Should be compatible with aws cli default credential configuration + # *NIX/Windows + try: + # See if we get the file + credentialConfig = ConfigParser() + credentialFilePath = os.path.expanduser(self._credentialConfigFilePath) # Is it compatible with windows? \/ + credentialConfig.read(credentialFilePath) + # Now we have the file, start looking for credentials... 
+ # 'default' section + ret = self._checkKeyInINIDefault(credentialConfig, "default") + if not ret: + # 'DEFAULT' section + ret = self._checkKeyInINIDefault(credentialConfig, "DEFAULT") + self._logger.debug("IAM credentials from file.") + except IOError: + self._logger.debug("No IAM credential configuration file in " + credentialFilePath) + except NoSectionError: + self._logger.error("Cannot find IAM 'default' section.") + return ret + + def _checkKeyInCustomConfig(self): + ret = dict() + if self._aws_access_key_id != "" and self._aws_secret_access_key != "": + ret["aws_access_key_id"] = self._aws_access_key_id + ret["aws_secret_access_key"] = self._aws_secret_access_key + # We do not necessarily need session token... + if self._aws_session_token != "": + ret["aws_session_token"] = self._aws_session_token + self._logger.debug("IAM credentials from custom config.") + return ret + + def createWebsocketEndpoint(self, host, port, region, method, awsServiceName, path): + # Return the endpoint as unicode string in 3.x + # Gather all the facts + amazonDate = self._createAmazonDate() + amazonDateSimple = amazonDate[0] # Unicode in 3.x + amazonDateComplex = amazonDate[1] # Unicode in 3.x + allKeys = self._checkIAMCredentials() # Unicode in 3.x + if not self._hasCredentialsNecessaryForWebsocket(allKeys): + raise wssNoKeyInEnvironmentError() + else: + # Because of self._hasCredentialsNecessaryForWebsocket(...), keyID and secretKey should not be None from here + keyID = allKeys["aws_access_key_id"] + secretKey = allKeys["aws_secret_access_key"] + # amazonDateSimple and amazonDateComplex are guaranteed not to be None + queryParameters = "X-Amz-Algorithm=AWS4-HMAC-SHA256" + \ + "&X-Amz-Credential=" + keyID + "%2F" + amazonDateSimple + "%2F" + region + "%2F" + awsServiceName + "%2Faws4_request" + \ + "&X-Amz-Date=" + amazonDateComplex + \ + "&X-Amz-Expires=86400" + \ + "&X-Amz-SignedHeaders=host" # Unicode in 3.x + hashedPayload = 
hashlib.sha256(str("").encode('utf-8')).hexdigest() # Unicode in 3.x + # Create the string to sign + signedHeaders = "host" + canonicalHeaders = "host:" + host + "\n" + canonicalRequest = method + "\n" + path + "\n" + queryParameters + "\n" + canonicalHeaders + "\n" + signedHeaders + "\n" + hashedPayload # Unicode in 3.x + hashedCanonicalRequest = hashlib.sha256(str(canonicalRequest).encode('utf-8')).hexdigest() # Unicode in 3.x + stringToSign = "AWS4-HMAC-SHA256\n" + amazonDateComplex + "\n" + amazonDateSimple + "/" + region + "/" + awsServiceName + "/aws4_request\n" + hashedCanonicalRequest # Unicode in 3.x + # Sign it + signingKey = self._getSignatureKey(secretKey, amazonDateSimple, region, awsServiceName) + signature = hmac.new(signingKey, (stringToSign).encode("utf-8"), hashlib.sha256).hexdigest() + # generate url + url = "wss://" + host + ":" + str(port) + path + '?' + queryParameters + "&X-Amz-Signature=" + signature + # See if we have STS token, if we do, add it + awsSessionTokenCandidate = allKeys.get("aws_session_token") + if awsSessionTokenCandidate is not None and len(awsSessionTokenCandidate) != 0: + aws_session_token = allKeys["aws_session_token"] + url += "&X-Amz-Security-Token=" + quote(aws_session_token.encode("utf-8")) # Unicode in 3.x + self._logger.debug("createWebsocketEndpoint: Websocket URL: " + url) + return url + + def _hasCredentialsNecessaryForWebsocket(self, allKeys): + awsAccessKeyIdCandidate = allKeys.get("aws_access_key_id") + awsSecretAccessKeyCandidate = allKeys.get("aws_secret_access_key") + # A None value is NOT considered a valid entry + validEntries = awsAccessKeyIdCandidate is not None and awsSecretAccessKeyCandidate is not None + if validEntries: + # An empty value is NOT considered a valid entry + validEntries &= (len(awsAccessKeyIdCandidate) != 0 and len(awsSecretAccessKeyCandidate) != 0) + return validEntries + + +# This is an internal class that buffers the incoming bytes into an +# internal buffer until it gets the full
desired length of bytes. +# At that time, this bufferedReader will be reset. +# *Error handling: +# For retry errors (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE, EAGAIN), +# leave them to the paho _packet_read for further handling (ignored and tried +# again when data is available). +# For other errors, leave them to the paho _packet_read for error reporting. + + +class _BufferedReader: + _sslSocket = None + _internalBuffer = None + _remainedLength = -1 + _bufferingInProgress = False + + def __init__(self, sslSocket): + self._sslSocket = sslSocket + self._internalBuffer = bytearray() + self._bufferingInProgress = False + + def _reset(self): + self._internalBuffer = bytearray() + self._remainedLength = -1 + self._bufferingInProgress = False + + def read(self, numberOfBytesToBeBuffered): + if not self._bufferingInProgress: # If the last read is completed... + self._remainedLength = numberOfBytesToBeBuffered + self._bufferingInProgress = True # Now we start buffering a new length of bytes + + while self._remainedLength > 0: # Read in a loop, always trying to read the remaining length + # If the data is temporarily not available, socket.error will be raised and caught by paho + dataChunk = self._sslSocket.read(self._remainedLength) + # There is a chance that the server terminates the connection without closing the socket. + # If that happens, raise an exception and enter the reconnect flow. + if not dataChunk: + raise socket.error(errno.ECONNABORTED, 0) + self._internalBuffer.extend(dataChunk) # Buffer the data + self._remainedLength -= len(dataChunk) # Update the remaining length + + # The requested length of bytes is buffered; reset the context and return it. + # Otherwise an error should have been raised. + ret = self._internalBuffer + self._reset() + return ret # This should always be a bytearray + + +# This is the internal class that sends requested data out chunk by chunk according +# to the availability of the socket write operation.
If the requested bytes of data +# (after encoding) needs to be sent out in separate socket write operations (most +# probably be interrupted by the error socket.error (errno = ssl.SSL_ERROR_WANT_WRITE).) +# , the write pointer is stored to ensure that the continued bytes will be sent next +# time this function gets called. +# *Error handling: +# For retry errors (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE, EAGAIN), +# leave them to the paho _packet_read for further handling (ignored and try +# again when data is available. +# For other errors, leave them to the paho _packet_read for error reporting. + + +class _BufferedWriter: + _sslSocket = None + _internalBuffer = None + _writingInProgress = False + _requestedDataLength = -1 + + def __init__(self, sslSocket): + self._sslSocket = sslSocket + self._internalBuffer = bytearray() + self._writingInProgress = False + self._requestedDataLength = -1 + + def _reset(self): + self._internalBuffer = bytearray() + self._writingInProgress = False + self._requestedDataLength = -1 + + # Input data for this function needs to be an encoded wss frame + # Always request for packet[pos=0:] (raw MQTT data) + def write(self, encodedData, payloadLength): + # encodedData should always be bytearray + # Check if we have a frame that is partially sent + if not self._writingInProgress: + self._internalBuffer = encodedData + self._writingInProgress = True + self._requestedDataLength = payloadLength + # Now, write as much as we can + lengthWritten = self._sslSocket.write(self._internalBuffer) + self._internalBuffer = self._internalBuffer[lengthWritten:] + # This MQTT packet has been sent out in a wss frame, completely + if len(self._internalBuffer) == 0: + ret = self._requestedDataLength + self._reset() + return ret + # This socket write is half-baked... 
+ else: + return 0 # Ensure that the 'pos' inside the MQTT packet never moves since we have not finished the transmission of this encoded frame + + +class SecuredWebSocketCore: + # Websocket Constants + _OP_CONTINUATION = 0x0 + _OP_TEXT = 0x1 + _OP_BINARY = 0x2 + _OP_CONNECTION_CLOSE = 0x8 + _OP_PING = 0x9 + _OP_PONG = 0xa + # Websocket Connect Status + _WebsocketConnectInit = -1 + _WebsocketDisconnected = 1 + + _logger = logging.getLogger(__name__) + + def __init__(self, socket, hostAddress, portNumber, AWSAccessKeyID="", AWSSecretAccessKey="", AWSSessionToken=""): + self._connectStatus = self._WebsocketConnectInit + # Handlers + self._sslSocket = socket + self._sigV4Handler = self._createSigV4Core() + self._sigV4Handler.setIAMCredentials(AWSAccessKeyID, AWSSecretAccessKey, AWSSessionToken) + # Endpoint Info + self._hostAddress = hostAddress + self._portNumber = portNumber + # Section Flags + self._hasOpByte = False + self._hasPayloadLengthFirst = False + self._hasPayloadLengthExtended = False + self._hasMaskKey = False + self._hasPayload = False + # Properties for current websocket frame + self._isFIN = False + self._RSVBits = None + self._opCode = None + self._needMaskKey = False + self._payloadLengthBytesLength = 1 + self._payloadLength = 0 + self._maskKey = None + self._payloadDataBuffer = bytearray() # Once the whole wss connection is lost, there is no need to keep the buffered payload + try: + self._handShake(hostAddress, portNumber) + except wssNoKeyInEnvironmentError: # Handle SigV4 signing and websocket handshaking errors + raise ValueError("No Access Key/KeyID Error") + except wssHandShakeError: + raise ValueError("Websocket Handshake Error") + except ClientError as e: + raise ValueError(e.message) + # Now we have a socket with secured websocket... 
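The handshake validation done in `__init__` above ultimately hinges on the `Sec-WebSocket-Accept` check that `_verifyWSSAcceptKey` performs below: the server must echo back `base64(SHA-1(client key + fixed GUID))`. A standalone sketch of that computation (the function name here is illustrative, not SDK code), checked against the sample nonce from RFC 6455:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455; every websocket server uses this exact value
GUID = b"258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def expected_accept_key(client_key):
    # The server proves it understood the upgrade request by echoing
    # base64(SHA-1(Sec-WebSocket-Key + GUID)) in the Sec-WebSocket-Accept header
    return base64.b64encode(hashlib.sha1(client_key + GUID).digest())

# RFC 6455's sample nonce and its documented accept key:
print(expected_accept_key(b"dGhlIHNhbXBsZSBub25jZQ=="))
# b's3pPLMBiTxaQ9kYGzzhZRbK+xOo='
```

Comparing this expected value against the parsed response header is exactly the equality test `_verifyWSSAcceptKey` returns.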
+        self._bufferedReader = _BufferedReader(self._sslSocket)
+        self._bufferedWriter = _BufferedWriter(self._sslSocket)
+
+    def _createSigV4Core(self):
+        return SigV4Core()
+
+    def _generateMaskKey(self):
+        return bytearray(os.urandom(4))
+        # os.urandom returns an ascii str in 2.x, converted to bytearray
+        # os.urandom returns bytes in 3.x, converted to bytearray
+
+    def _reset(self):  # Reset the context for wss frame reception
+        # Control info
+        self._hasOpByte = False
+        self._hasPayloadLengthFirst = False
+        self._hasPayloadLengthExtended = False
+        self._hasMaskKey = False
+        self._hasPayload = False
+        # Frame Info
+        self._isFIN = False
+        self._RSVBits = None
+        self._opCode = None
+        self._needMaskKey = False
+        self._payloadLengthBytesLength = 1
+        self._payloadLength = 0
+        self._maskKey = None
+        # Never reset the payloadData since we might have fragmented MQTT data from the previous frame
+
+    def _generateWSSKey(self):
+        return base64.b64encode(os.urandom(128))  # Bytes
+
+    def _verifyWSSResponse(self, response, clientKey):
+        # Check if it is a 101 response
+        rawResponse = response.strip().lower()
+        if b"101 switching protocols" not in rawResponse or b"upgrade: websocket" not in rawResponse or b"connection: upgrade" not in rawResponse:
+            return False
+        # Parse out the sec-websocket-accept
+        WSSAcceptKeyIndex = response.strip().index(b"sec-websocket-accept: ") + len(b"sec-websocket-accept: ")
+        rawSecWebSocketAccept = response.strip()[WSSAcceptKeyIndex:].split(b"\r\n")[0].strip()
+        # Verify the WSSAcceptKey
+        return self._verifyWSSAcceptKey(rawSecWebSocketAccept, clientKey)
+
+    def _verifyWSSAcceptKey(self, srcAcceptKey, clientKey):
+        GUID = b"258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
+        verifyServerAcceptKey = base64.b64encode((hashlib.sha1(clientKey + GUID)).digest())  # Bytes
+        return srcAcceptKey == verifyServerAcceptKey
+
+    def _handShake(self, hostAddress, portNumber):
+        CRLF = "\r\n"
+        IOT_ENDPOINT_PATTERN = r"^[0-9a-zA-Z]+(\.ats|-ats)?\.iot\.(.*)\.amazonaws\..*"
+        matched
= re.compile(IOT_ENDPOINT_PATTERN, re.IGNORECASE).match(hostAddress) + if not matched: + raise ClientError("Invalid endpoint pattern for wss: %s" % hostAddress) + region = matched.group(2) + signedURL = self._sigV4Handler.createWebsocketEndpoint(hostAddress, portNumber, region, "GET", "iotdata", "/mqtt") + # Now we got a signedURL + path = signedURL[signedURL.index("/mqtt"):] + # Assemble HTTP request headers + Method = "GET " + path + " HTTP/1.1" + CRLF + Host = "Host: " + hostAddress + CRLF + Connection = "Connection: " + "Upgrade" + CRLF + Upgrade = "Upgrade: " + "websocket" + CRLF + secWebSocketVersion = "Sec-WebSocket-Version: " + "13" + CRLF + rawSecWebSocketKey = self._generateWSSKey() # Bytes + secWebSocketKey = "sec-websocket-key: " + rawSecWebSocketKey.decode('utf-8') + CRLF # Should be randomly generated... + secWebSocketProtocol = "Sec-WebSocket-Protocol: " + "mqttv3.1" + CRLF + secWebSocketExtensions = "Sec-WebSocket-Extensions: " + "permessage-deflate; client_max_window_bits" + CRLF + # Send the HTTP request + # Ensure that we are sending bytes, not by any chance unicode string + handshakeBytes = Method + Host + Connection + Upgrade + secWebSocketVersion + secWebSocketProtocol + secWebSocketExtensions + secWebSocketKey + CRLF + handshakeBytes = handshakeBytes.encode('utf-8') + self._sslSocket.write(handshakeBytes) + # Read it back (Non-blocking socket) + timeStart = time.time() + wssHandshakeResponse = bytearray() + while len(wssHandshakeResponse) == 0: + try: + wssHandshakeResponse += self._sslSocket.read(1024) # Response is always less than 1024 bytes + except socket.error as err: + if err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE: + if time.time() - timeStart > self._getTimeoutSec(): + raise err # We make sure that reconnect gets retried in Paho upon a wss reconnect response timeout + else: + raise err + # Verify response + # Now both wssHandshakeResponse and rawSecWebSocketKey are byte strings + if not 
self._verifyWSSResponse(wssHandshakeResponse, rawSecWebSocketKey): + raise wssHandShakeError() + else: + pass + + def _getTimeoutSec(self): + return DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC + + # Used to create a single wss frame + # Assume that the maximum length of a MQTT packet never exceeds the maximum length + # for a wss frame. Therefore, the FIN bit for the encoded frame will always be 1. + # Frames are encoded as BINARY frames. + def _encodeFrame(self, rawPayload, opCode, masked=1): + ret = bytearray() + # Op byte + opByte = 0x80 | opCode # Always a FIN, no RSV bits + ret.append(opByte) + # Payload Length bytes + maskBit = masked + payloadLength = len(rawPayload) + if payloadLength <= 125: + ret.append((maskBit << 7) | payloadLength) + elif payloadLength <= 0xffff: # 16-bit unsigned int + ret.append((maskBit << 7) | 126) + ret.extend(struct.pack("!H", payloadLength)) + elif payloadLength <= 0x7fffffffffffffff: # 64-bit unsigned int (most significant bit must be 0) + ret.append((maskBit << 7) | 127) + ret.extend(struct.pack("!Q", payloadLength)) + else: # Overflow + raise ValueError("Exceeds the maximum number of bytes for a single websocket frame.") + if maskBit == 1: + # Mask key bytes + maskKey = self._generateMaskKey() + ret.extend(maskKey) + # Mask the payload + payloadBytes = bytearray(rawPayload) + if maskBit == 1: + for i in range(0, payloadLength): + payloadBytes[i] ^= maskKey[i % 4] + ret.extend(payloadBytes) + # Return the assembled wss frame + return ret + + # Used for the wss client to close a wss connection + # Create and send a masked wss closing frame + def _closeWssConnection(self): + # Frames sent from client to server must be masked + self._sslSocket.write(self._encodeFrame(b"", self._OP_CONNECTION_CLOSE, masked=1)) + + # Used for the wss client to respond to a wss PING from server + # Create and send a masked PONG frame + def _sendPONG(self): + # Frames sent from client to server must be masked + self._sslSocket.write(self._encodeFrame(b"", 
self._OP_PONG, masked=1))
+
+    # Override sslSocket read. Always read from the wss internal payload buffer, which
+    # contains the buffered MQTT packet. This read will decode ONE wss frame every time
+    # and load the payload in for MQTT _packet_read. At any time, MQTT _packet_read
+    # should be able to read a complete MQTT packet from the payload (buffered per wss
+    # frame payload). If the MQTT packet is broken into separate wss frames, the different
+    # chunks will be buffered in separate frames and MQTT _packet_read will not be able
+    # to collect a complete MQTT packet to operate on until the necessary payload is
+    # fully buffered.
+    # If the requested number of bytes is not available, SSL_ERROR_WANT_READ will be
+    # raised to trigger another call of _packet_read when the data is available again.
+    def read(self, numberOfBytes):
+        # Check if we have enough data for paho
+        # _payloadDataBuffer will be non-empty only when the payload of a new wss frame
+        # has been unmasked.
+        if len(self._payloadDataBuffer) >= numberOfBytes:
+            ret = self._payloadDataBuffer[0:numberOfBytes]
+            self._payloadDataBuffer = self._payloadDataBuffer[numberOfBytes:]
+            # struct.unpack(fmt, string)  # Py2.x
+            # struct.unpack(fmt, buffer)  # Py3.x
+            # Here ret is always in bytes (buffer interface)
+            if sys.version_info[0] < 3:  # Py2.x
+                ret = str(ret)
+            return ret
+        # We don't. Try to buffer from the socket (it's a new wss frame).
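The state machine that `read` walks through below (op byte, first payload-length byte, optional extended length, then payload) follows the standard websocket framing layout. As a standalone illustration of those bit tests (assuming Python 3 byte indexing; none of these names exist in the SDK), the header fields can be pulled apart like this:

```python
import struct

def parse_frame_header(first_two, extended=b""):
    """Decode a websocket frame header.

    first_two: the first two bytes of the frame
    extended:  2 or 8 extra bytes when the 7-bit length field is 126 or 127
    """
    first_two = bytearray(first_two)
    fin = (first_two[0] & 0x80) == 0x80     # FIN flag, same test as _isFIN
    rsv = first_two[0] & 0x70               # RSV bits, must be 0 with no extensions
    op_code = first_two[0] & 0x0F           # opcode, e.g. 0x2 for BINARY
    masked = (first_two[1] & 0x80) == 0x80  # mask bit, same test as _needMaskKey
    length = first_two[1] & 0x7F            # 7-bit payload length
    if length == 126:                       # next 2 bytes: 16-bit length
        length = struct.unpack("!H", extended)[0]
    elif length == 127:                     # next 8 bytes: 64-bit length
        length = struct.unpack("!Q", extended)[0]
    return fin, rsv, op_code, masked, length

# An unmasked BINARY frame carrying a 5-byte payload:
print(parse_frame_header(b"\x82\x05"))
# (True, 0, 2, False, 5)
```

The SDK's `read` performs the same tests incrementally, one buffered chunk at a time, so that a partial header never blocks the non-blocking socket loop.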
+ if not self._hasOpByte: # Check if we need to buffer OpByte + opByte = self._bufferedReader.read(1) + self._isFIN = (opByte[0] & 0x80) == 0x80 + self._RSVBits = (opByte[0] & 0x70) + self._opCode = (opByte[0] & 0x0f) + self._hasOpByte = True # Finished buffering opByte + # Check if any of the RSV bits are set, if so, close the connection + # since client never sends negotiated extensions + if self._RSVBits != 0x0: + self._closeWssConnection() + self._connectStatus = self._WebsocketDisconnected + self._payloadDataBuffer = bytearray() + raise socket.error(ssl.SSL_ERROR_WANT_READ, "RSV bits set with NO negotiated extensions.") + if not self._hasPayloadLengthFirst: # Check if we need to buffer First Payload Length byte + payloadLengthFirst = self._bufferedReader.read(1) + self._hasPayloadLengthFirst = True # Finished buffering first byte of payload length + self._needMaskKey = (payloadLengthFirst[0] & 0x80) == 0x80 + payloadLengthFirstByteArray = bytearray() + payloadLengthFirstByteArray.extend(payloadLengthFirst) + self._payloadLength = (payloadLengthFirstByteArray[0] & 0x7f) + + if self._payloadLength == 126: + self._payloadLengthBytesLength = 2 + self._hasPayloadLengthExtended = False # Force to buffer the extended + elif self._payloadLength == 127: + self._payloadLengthBytesLength = 8 + self._hasPayloadLengthExtended = False # Force to buffer the extended + else: # _payloadLength <= 125: + self._hasPayloadLengthExtended = True # No need to buffer extended payload length + if not self._hasPayloadLengthExtended: # Check if we need to buffer Extended Payload Length bytes + payloadLengthExtended = self._bufferedReader.read(self._payloadLengthBytesLength) + self._hasPayloadLengthExtended = True + if sys.version_info[0] < 3: + payloadLengthExtended = str(payloadLengthExtended) + if self._payloadLengthBytesLength == 2: + self._payloadLength = struct.unpack("!H", payloadLengthExtended)[0] + else: # _payloadLengthBytesLength == 8 + self._payloadLength = struct.unpack("!Q", 
payloadLengthExtended)[0]
+
+        if self._needMaskKey:  # Response from server is masked, close the connection
+            self._closeWssConnection()
+            self._connectStatus = self._WebsocketDisconnected
+            self._payloadDataBuffer = bytearray()
+            raise socket.error(ssl.SSL_ERROR_WANT_READ, "Server response masked, closing connection and try again.")
+
+        if not self._hasPayload:  # Check if we need to buffer the payload
+            payloadForThisFrame = self._bufferedReader.read(self._payloadLength)
+            self._hasPayload = True
+            # The client side should never receive a masked packet from the server side.
+            # Unmask it as needed:
+            #if self._needMaskKey:
+            #    for i in range(0, self._payloadLength):
+            #        payloadForThisFrame[i] ^= self._maskKey[i % 4]
+            # Append it to the internal payload buffer
+            self._payloadDataBuffer.extend(payloadForThisFrame)
+        # Now we have the complete wss frame, reset the context
+        # Check to see if it is a wss closing frame
+        if self._opCode == self._OP_CONNECTION_CLOSE:
+            self._connectStatus = self._WebsocketDisconnected
+            self._payloadDataBuffer = bytearray()  # Ensure that once the wss closing frame comes, we have nothing to read and start all over again
+        # Check to see if it is a wss PING frame
+        if self._opCode == self._OP_PING:
+            self._sendPONG()  # Nothing more to do here; if the transmission of the last wss MQTT packet is not finished, it will continue
+        self._reset()
+        # Check again if we have enough data for paho
+        if len(self._payloadDataBuffer) >= numberOfBytes:
+            ret = self._payloadDataBuffer[0:numberOfBytes]
+            self._payloadDataBuffer = self._payloadDataBuffer[numberOfBytes:]
+            # struct.unpack(fmt, string)  # Py2.x
+            # struct.unpack(fmt, buffer)  # Py3.x
+            # Here ret is always in bytes (buffer interface)
+            if sys.version_info[0] < 3:  # Py2.x
+                ret = str(ret)
+            return ret
+        else:  # Fragmented MQTT packets in separate wss frames
+            raise socket.error(ssl.SSL_ERROR_WANT_READ, "Not a complete MQTT packet payload within this wss frame.")
+
+    def write(self, bytesToBeSent):
# When there is a disconnection, select will report a TypeError which triggers the reconnect. + # In reconnect, Paho will set the socket object (mocked by wss) to None, blocking other ops + # before a connection is re-established. + # This 'low-level' socket write op should always be able to write to plain socket. + # Error reporting is performed by Python socket itself. + # Wss closing frame handling is performed in the wss read. + return self._bufferedWriter.write(self._encodeFrame(bytesToBeSent, self._OP_BINARY, 1), len(bytesToBeSent)) + + def close(self): + if self._sslSocket is not None: + self._sslSocket.close() + self._sslSocket = None + + def getpeercert(self): + return self._sslSocket.getpeercert() + + def getSSLSocket(self): + if self._connectStatus != self._WebsocketDisconnected: + return self._sslSocket + else: + return None # Leave the sslSocket to Paho to close it. (_ssl.close() -> wssCore.close()) diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/__init__.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/clients.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/clients.py new file mode 100644 index 0000000..bb670f7 --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/clients.py @@ -0,0 +1,244 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. 
This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +import ssl +import logging +from threading import Lock +from numbers import Number +import AWSIoTPythonSDK.core.protocol.paho.client as mqtt +from AWSIoTPythonSDK.core.protocol.paho.client import MQTT_ERR_SUCCESS +from AWSIoTPythonSDK.core.protocol.internal.events import FixedEventMids + + +class ClientStatus(object): + + IDLE = 0 + CONNECT = 1 + RESUBSCRIBE = 2 + DRAINING = 3 + STABLE = 4 + USER_DISCONNECT = 5 + ABNORMAL_DISCONNECT = 6 + + +class ClientStatusContainer(object): + + def __init__(self): + self._status = ClientStatus.IDLE + + def get_status(self): + return self._status + + def set_status(self, status): + if ClientStatus.USER_DISCONNECT == self._status: # If user requests to disconnect, no status updates other than user connect + if ClientStatus.CONNECT == status: + self._status = status + else: + self._status = status + + +class InternalAsyncMqttClient(object): + + _logger = logging.getLogger(__name__) + + def __init__(self, client_id, clean_session, protocol, use_wss): + self._paho_client = self._create_paho_client(client_id, clean_session, None, protocol, use_wss) + self._use_wss = use_wss + self._event_callback_map_lock = Lock() + self._event_callback_map = dict() + + def _create_paho_client(self, client_id, clean_session, user_data, protocol, use_wss): + self._logger.debug("Initializing MQTT layer...") + return mqtt.Client(client_id, clean_session, user_data, protocol, use_wss) + + # TODO: Merge credentials providers configuration into one + def set_cert_credentials_provider(self, cert_credentials_provider): + # History issue from Yun SDK where AR9331 embedded Linux only have Python 2.7.3 + # pre-installed. In this version, TLSv1_2 is not even an option. 
+ # SSLv23 is a work-around which selects the highest TLS version between the client + # and service. If user installs opensslv1.0.1+, this option will work fine for Mutual + # Auth. + # Note that we cannot force TLSv1.2 for Mutual Auth. in Python 2.7.3 and TLS support + # in Python only starts from Python2.7. + # See also: https://docs.python.org/2/library/ssl.html#ssl.PROTOCOL_SSLv23 + if self._use_wss: + ca_path = cert_credentials_provider.get_ca_path() + self._paho_client.tls_set(ca_certs=ca_path, cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_SSLv23) + else: + ca_path = cert_credentials_provider.get_ca_path() + cert_path = cert_credentials_provider.get_cert_path() + key_path = cert_credentials_provider.get_key_path() + self._paho_client.tls_set(ca_certs=ca_path,certfile=cert_path, keyfile=key_path, + cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_SSLv23) + + def set_iam_credentials_provider(self, iam_credentials_provider): + self._paho_client.configIAMCredentials(iam_credentials_provider.get_access_key_id(), + iam_credentials_provider.get_secret_access_key(), + iam_credentials_provider.get_session_token()) + + def set_endpoint_provider(self, endpoint_provider): + self._endpoint_provider = endpoint_provider + + def configure_last_will(self, topic, payload, qos, retain=False): + self._paho_client.will_set(topic, payload, qos, retain) + + def configure_alpn_protocols(self, alpn_protocols): + self._paho_client.config_alpn_protocols(alpn_protocols) + + def clear_last_will(self): + self._paho_client.will_clear() + + def set_username_password(self, username, password=None): + self._paho_client.username_pw_set(username, password) + + def set_socket_factory(self, socket_factory): + self._paho_client.socket_factory_set(socket_factory) + + def configure_reconnect_back_off(self, base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec): + self._paho_client.setBackoffTiming(base_reconnect_quiet_sec, max_reconnect_quiet_sec, 
stable_connection_sec) + + def connect(self, keep_alive_sec, ack_callback=None): + host = self._endpoint_provider.get_host() + port = self._endpoint_provider.get_port() + + with self._event_callback_map_lock: + self._logger.debug("Filling in fixed event callbacks: CONNACK, DISCONNECT, MESSAGE") + self._event_callback_map[FixedEventMids.CONNACK_MID] = self._create_combined_on_connect_callback(ack_callback) + self._event_callback_map[FixedEventMids.DISCONNECT_MID] = self._create_combined_on_disconnect_callback(None) + self._event_callback_map[FixedEventMids.MESSAGE_MID] = self._create_converted_on_message_callback() + + rc = self._paho_client.connect(host, port, keep_alive_sec) + if MQTT_ERR_SUCCESS == rc: + self.start_background_network_io() + + return rc + + def start_background_network_io(self): + self._logger.debug("Starting network I/O thread...") + self._paho_client.loop_start() + + def stop_background_network_io(self): + self._logger.debug("Stopping network I/O thread...") + self._paho_client.loop_stop() + + def disconnect(self, ack_callback=None): + with self._event_callback_map_lock: + rc = self._paho_client.disconnect() + if MQTT_ERR_SUCCESS == rc: + self._logger.debug("Filling in custom disconnect event callback...") + combined_on_disconnect_callback = self._create_combined_on_disconnect_callback(ack_callback) + self._event_callback_map[FixedEventMids.DISCONNECT_MID] = combined_on_disconnect_callback + return rc + + def _create_combined_on_connect_callback(self, ack_callback): + def combined_on_connect_callback(mid, data): + self.on_online() + if ack_callback: + ack_callback(mid, data) + return combined_on_connect_callback + + def _create_combined_on_disconnect_callback(self, ack_callback): + def combined_on_disconnect_callback(mid, data): + self.on_offline() + if ack_callback: + ack_callback(mid, data) + return combined_on_disconnect_callback + + def _create_converted_on_message_callback(self): + def converted_on_message_callback(mid, data): + 
self.on_message(data) + return converted_on_message_callback + + # For client online notification + def on_online(self): + pass + + # For client offline notification + def on_offline(self): + pass + + # For client message reception notification + def on_message(self, message): + pass + + def publish(self, topic, payload, qos, retain=False, ack_callback=None): + with self._event_callback_map_lock: + rc, mid = self._paho_client.publish(topic, payload, qos, retain) + if MQTT_ERR_SUCCESS == rc and qos > 0 and ack_callback: + self._logger.debug("Filling in custom puback (QoS>0) event callback...") + self._event_callback_map[mid] = ack_callback + return rc, mid + + def subscribe(self, topic, qos, ack_callback=None): + with self._event_callback_map_lock: + rc, mid = self._paho_client.subscribe(topic, qos) + if MQTT_ERR_SUCCESS == rc and ack_callback: + self._logger.debug("Filling in custom suback event callback...") + self._event_callback_map[mid] = ack_callback + return rc, mid + + def unsubscribe(self, topic, ack_callback=None): + with self._event_callback_map_lock: + rc, mid = self._paho_client.unsubscribe(topic) + if MQTT_ERR_SUCCESS == rc and ack_callback: + self._logger.debug("Filling in custom unsuback event callback...") + self._event_callback_map[mid] = ack_callback + return rc, mid + + def register_internal_event_callbacks(self, on_connect, on_disconnect, on_publish, on_subscribe, on_unsubscribe, on_message): + self._logger.debug("Registering internal event callbacks to MQTT layer...") + self._paho_client.on_connect = on_connect + self._paho_client.on_disconnect = on_disconnect + self._paho_client.on_publish = on_publish + self._paho_client.on_subscribe = on_subscribe + self._paho_client.on_unsubscribe = on_unsubscribe + self._paho_client.on_message = on_message + + def unregister_internal_event_callbacks(self): + self._logger.debug("Unregistering internal event callbacks from MQTT layer...") + self._paho_client.on_connect = None + 
self._paho_client.on_disconnect = None + self._paho_client.on_publish = None + self._paho_client.on_subscribe = None + self._paho_client.on_unsubscribe = None + self._paho_client.on_message = None + + def invoke_event_callback(self, mid, data=None): + with self._event_callback_map_lock: + event_callback = self._event_callback_map.get(mid) + # For invoking the event callback, we do not need to acquire the lock + if event_callback: + self._logger.debug("Invoking custom event callback...") + if data is not None: + event_callback(mid=mid, data=data) + else: + event_callback(mid=mid) + if isinstance(mid, Number): # Do NOT remove callbacks for CONNACK/DISCONNECT/MESSAGE + self._logger.debug("This custom event callback is for pub/sub/unsub, removing it after invocation...") + with self._event_callback_map_lock: + del self._event_callback_map[mid] + + def remove_event_callback(self, mid): + with self._event_callback_map_lock: + if mid in self._event_callback_map: + self._logger.debug("Removing custom event callback...") + del self._event_callback_map[mid] + + def clean_up_event_callbacks(self): + with self._event_callback_map_lock: + self._event_callback_map.clear() + + def get_event_callback_map(self): + return self._event_callback_map diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/defaults.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/defaults.py new file mode 100644 index 0000000..66817d3 --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/defaults.py @@ -0,0 +1,20 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. 
+# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC = 30 +DEFAULT_OPERATION_TIMEOUT_SEC = 5 +DEFAULT_DRAINING_INTERNAL_SEC = 0.5 +METRICS_PREFIX = "?SDK=Python&Version=" +ALPN_PROTCOLS = "x-amzn-mqtt-ca" \ No newline at end of file diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/events.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/events.py new file mode 100644 index 0000000..90f0b70 --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/events.py @@ -0,0 +1,29 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
+# */ + +class EventTypes(object): + CONNACK = 0 + DISCONNECT = 1 + PUBACK = 2 + SUBACK = 3 + UNSUBACK = 4 + MESSAGE = 5 + + +class FixedEventMids(object): + CONNACK_MID = "CONNECTED" + DISCONNECT_MID = "DISCONNECTED" + MESSAGE_MID = "MESSAGE" + QUEUED_MID = "QUEUED" diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/queues.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/queues.py new file mode 100644 index 0000000..77046a8 --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/queues.py @@ -0,0 +1,87 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
+# */
+
+import logging
+from AWSIoTPythonSDK.core.util.enums import DropBehaviorTypes
+
+
+class AppendResults(object):
+    APPEND_FAILURE_QUEUE_FULL = -1
+    APPEND_FAILURE_QUEUE_DISABLED = -2
+    APPEND_SUCCESS = 0
+
+
+class OfflineRequestQueue(list):
+    _logger = logging.getLogger(__name__)
+
+    def __init__(self, max_size, drop_behavior=DropBehaviorTypes.DROP_NEWEST):
+        if not isinstance(max_size, int) or not isinstance(drop_behavior, int):
+            self._logger.error("init: MaximumSize/DropBehavior must be integer.")
+            raise TypeError("MaximumSize/DropBehavior must be integer.")
+        if drop_behavior != DropBehaviorTypes.DROP_OLDEST and drop_behavior != DropBehaviorTypes.DROP_NEWEST:
+            self._logger.error("init: Drop behavior not supported.")
+            raise ValueError("Drop behavior not supported.")
+
+        list.__init__(self)
+        self._drop_behavior = drop_behavior
+        # When self._max_size > 0, the queue is limited
+        # When self._max_size == 0, the queue is disabled
+        # When self._max_size < 0, the queue is infinite
+        self._max_size = max_size
+
+    def _is_enabled(self):
+        return self._max_size != 0
+
+    def _need_drop_messages(self):
+        # Need to drop messages when:
+        # 1. Queue is limited and full
+        # 2. Queue is disabled
+        is_queue_full = len(self) >= self._max_size
+        is_queue_limited = self._max_size > 0
+        is_queue_disabled = not self._is_enabled()
+        return (is_queue_full and is_queue_limited) or is_queue_disabled
+
+    def set_behavior_drop_newest(self):
+        self._drop_behavior = DropBehaviorTypes.DROP_NEWEST
+
+    def set_behavior_drop_oldest(self):
+        self._drop_behavior = DropBehaviorTypes.DROP_OLDEST
+
+    # Override
+    # Append to a queue with a limited size.
+ # Return APPEND_SUCCESS if the append is successful + # Return APPEND_FAILURE_QUEUE_FULL if the append failed because the queue is full + # Return APPEND_FAILURE_QUEUE_DISABLED if the append failed because the queue is disabled + def append(self, data): + ret = AppendResults.APPEND_SUCCESS + if self._is_enabled(): + if self._need_drop_messages(): + # We should drop the newest + if DropBehaviorTypes.DROP_NEWEST == self._drop_behavior: + self._logger.warn("append: Full queue. Drop the newest: " + str(data)) + ret = AppendResults.APPEND_FAILURE_QUEUE_FULL + # We should drop the oldest + else: + current_oldest = super(OfflineRequestQueue, self).pop(0) + self._logger.warn("append: Full queue. Drop the oldest: " + str(current_oldest)) + super(OfflineRequestQueue, self).append(data) + ret = AppendResults.APPEND_FAILURE_QUEUE_FULL + else: + self._logger.debug("append: Add new element: " + str(data)) + super(OfflineRequestQueue, self).append(data) + else: + self._logger.debug("append: Queue is disabled. Drop the message: " + str(data)) + ret = AppendResults.APPEND_FAILURE_QUEUE_DISABLED + return ret diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/requests.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/requests.py new file mode 100644 index 0000000..bd2585d --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/requests.py @@ -0,0 +1,27 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. 
This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +class RequestTypes(object): + CONNECT = 0 + DISCONNECT = 1 + PUBLISH = 2 + SUBSCRIBE = 3 + UNSUBSCRIBE = 4 + +class QueueableRequest(object): + + def __init__(self, type, data): + self.type = type + self.data = data # Can be a tuple diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/workers.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/workers.py new file mode 100644 index 0000000..e52db3f --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/internal/workers.py @@ -0,0 +1,296 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
+# */ + +import time +import logging +from threading import Thread +from threading import Event +from AWSIoTPythonSDK.core.protocol.internal.events import EventTypes +from AWSIoTPythonSDK.core.protocol.internal.events import FixedEventMids +from AWSIoTPythonSDK.core.protocol.internal.clients import ClientStatus +from AWSIoTPythonSDK.core.protocol.internal.queues import OfflineRequestQueue +from AWSIoTPythonSDK.core.protocol.internal.requests import RequestTypes +from AWSIoTPythonSDK.core.protocol.paho.client import topic_matches_sub +from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_DRAINING_INTERNAL_SEC + + +class EventProducer(object): + + _logger = logging.getLogger(__name__) + + def __init__(self, cv, event_queue): + self._cv = cv + self._event_queue = event_queue + + def on_connect(self, client, user_data, flags, rc): + self._add_to_queue(FixedEventMids.CONNACK_MID, EventTypes.CONNACK, rc) + self._logger.debug("Produced [connack] event") + + def on_disconnect(self, client, user_data, rc): + self._add_to_queue(FixedEventMids.DISCONNECT_MID, EventTypes.DISCONNECT, rc) + self._logger.debug("Produced [disconnect] event") + + def on_publish(self, client, user_data, mid): + self._add_to_queue(mid, EventTypes.PUBACK, None) + self._logger.debug("Produced [puback] event") + + def on_subscribe(self, client, user_data, mid, granted_qos): + self._add_to_queue(mid, EventTypes.SUBACK, granted_qos) + self._logger.debug("Produced [suback] event") + + def on_unsubscribe(self, client, user_data, mid): + self._add_to_queue(mid, EventTypes.UNSUBACK, None) + self._logger.debug("Produced [unsuback] event") + + def on_message(self, client, user_data, message): + self._add_to_queue(FixedEventMids.MESSAGE_MID, EventTypes.MESSAGE, message) + self._logger.debug("Produced [message] event") + + def _add_to_queue(self, mid, event_type, data): + with self._cv: + self._event_queue.put((mid, event_type, data)) + self._cv.notify() + + +class EventConsumer(object): + + 
MAX_DISPATCH_INTERNAL_SEC = 0.01 + _logger = logging.getLogger(__name__) + + def __init__(self, cv, event_queue, internal_async_client, + subscription_manager, offline_requests_manager, client_status): + self._cv = cv + self._event_queue = event_queue + self._internal_async_client = internal_async_client + self._subscription_manager = subscription_manager + self._offline_requests_manager = offline_requests_manager + self._client_status = client_status + self._is_running = False + self._draining_interval_sec = DEFAULT_DRAINING_INTERNAL_SEC + self._dispatch_methods = { + EventTypes.CONNACK : self._dispatch_connack, + EventTypes.DISCONNECT : self._dispatch_disconnect, + EventTypes.PUBACK : self._dispatch_puback, + EventTypes.SUBACK : self._dispatch_suback, + EventTypes.UNSUBACK : self._dispatch_unsuback, + EventTypes.MESSAGE : self._dispatch_message + } + self._offline_request_handlers = { + RequestTypes.PUBLISH : self._handle_offline_publish, + RequestTypes.SUBSCRIBE : self._handle_offline_subscribe, + RequestTypes.UNSUBSCRIBE : self._handle_offline_unsubscribe + } + self._stopper = Event() + + def update_offline_requests_manager(self, offline_requests_manager): + self._offline_requests_manager = offline_requests_manager + + def update_draining_interval_sec(self, draining_interval_sec): + self._draining_interval_sec = draining_interval_sec + + def get_draining_interval_sec(self): + return self._draining_interval_sec + + def is_running(self): + return self._is_running + + def start(self): + self._stopper.clear() + self._is_running = True + dispatch_events = Thread(target=self._dispatch) + dispatch_events.daemon = True + dispatch_events.start() + self._logger.debug("Event consuming thread started") + + def stop(self): + if self._is_running: + self._is_running = False + self._clean_up() + self._logger.debug("Event consuming thread stopped") + + def _clean_up(self): + self._logger.debug("Cleaning up before stopping event consuming") + with self._event_queue.mutex: + 
self._event_queue.queue.clear() + self._logger.debug("Event queue cleared") + self._internal_async_client.stop_background_network_io() + self._logger.debug("Network thread stopped") + self._internal_async_client.clean_up_event_callbacks() + self._logger.debug("Event callbacks cleared") + + def wait_until_it_stops(self, timeout_sec): + self._logger.debug("Waiting for event consumer to completely stop") + return self._stopper.wait(timeout=timeout_sec) + + def is_fully_stopped(self): + return self._stopper.is_set() + + def _dispatch(self): + while self._is_running: + with self._cv: + if self._event_queue.empty(): + self._cv.wait(self.MAX_DISPATCH_INTERNAL_SEC) + else: + while not self._event_queue.empty(): + self._dispatch_one() + self._stopper.set() + self._logger.debug("Exiting dispatching loop...") + + def _dispatch_one(self): + mid, event_type, data = self._event_queue.get() + if mid: + self._dispatch_methods[event_type](mid, data) + self._internal_async_client.invoke_event_callback(mid, data=data) + # We need to make sure disconnect event gets dispatched and then we stop the consumer + if self._need_to_stop_dispatching(mid): + self.stop() + + def _need_to_stop_dispatching(self, mid): + status = self._client_status.get_status() + return (ClientStatus.USER_DISCONNECT == status or ClientStatus.CONNECT == status) \ + and mid == FixedEventMids.DISCONNECT_MID + + def _dispatch_connack(self, mid, rc): + status = self._client_status.get_status() + self._logger.debug("Dispatching [connack] event") + if self._need_recover(): + if ClientStatus.STABLE != status: # To avoid multiple connack dispatching + self._logger.debug("Has recovery job") + clean_up_debt = Thread(target=self._clean_up_debt) + clean_up_debt.start() + else: + self._logger.debug("No need for recovery") + self._client_status.set_status(ClientStatus.STABLE) + + def _need_recover(self): + return self._subscription_manager.list_records() or self._offline_requests_manager.has_more() + + def _clean_up_debt(self): 
+ self._handle_resubscribe() + self._handle_draining() + self._client_status.set_status(ClientStatus.STABLE) + + def _handle_resubscribe(self): + subscriptions = self._subscription_manager.list_records() + if subscriptions and not self._has_user_disconnect_request(): + self._logger.debug("Start resubscribing") + self._client_status.set_status(ClientStatus.RESUBSCRIBE) + for topic, (qos, message_callback, ack_callback) in subscriptions: + if self._has_user_disconnect_request(): + self._logger.debug("User disconnect detected") + break + self._internal_async_client.subscribe(topic, qos, ack_callback) + + def _handle_draining(self): + if self._offline_requests_manager.has_more() and not self._has_user_disconnect_request(): + self._logger.debug("Start draining") + self._client_status.set_status(ClientStatus.DRAINING) + while self._offline_requests_manager.has_more(): + if self._has_user_disconnect_request(): + self._logger.debug("User disconnect detected") + break + offline_request = self._offline_requests_manager.get_next() + if offline_request: + self._offline_request_handlers[offline_request.type](offline_request) + time.sleep(self._draining_interval_sec) + + def _has_user_disconnect_request(self): + return ClientStatus.USER_DISCONNECT == self._client_status.get_status() + + def _dispatch_disconnect(self, mid, rc): + self._logger.debug("Dispatching [disconnect] event") + status = self._client_status.get_status() + if ClientStatus.USER_DISCONNECT == status or ClientStatus.CONNECT == status: + pass + else: + self._client_status.set_status(ClientStatus.ABNORMAL_DISCONNECT) + + # For puback, suback and unsuback, ack callback invocation is handled in dispatch_one + # Do nothing in the event dispatching itself + def _dispatch_puback(self, mid, rc): + self._logger.debug("Dispatching [puback] event") + + def _dispatch_suback(self, mid, rc): + self._logger.debug("Dispatching [suback] event") + + def _dispatch_unsuback(self, mid, rc): + self._logger.debug("Dispatching 
[unsuback] event") + + def _dispatch_message(self, mid, message): + self._logger.debug("Dispatching [message] event") + subscriptions = self._subscription_manager.list_records() + if subscriptions: + for topic, (qos, message_callback, _) in subscriptions: + if topic_matches_sub(topic, message.topic) and message_callback: + message_callback(None, None, message) # message_callback(client, userdata, message) + + def _handle_offline_publish(self, request): + topic, payload, qos, retain = request.data + self._internal_async_client.publish(topic, payload, qos, retain) + self._logger.debug("Processed offline publish request") + + def _handle_offline_subscribe(self, request): + topic, qos, message_callback, ack_callback = request.data + self._subscription_manager.add_record(topic, qos, message_callback, ack_callback) + self._internal_async_client.subscribe(topic, qos, ack_callback) + self._logger.debug("Processed offline subscribe request") + + def _handle_offline_unsubscribe(self, request): + topic, ack_callback = request.data + self._subscription_manager.remove_record(topic) + self._internal_async_client.unsubscribe(topic, ack_callback) + self._logger.debug("Processed offline unsubscribe request") + + +class SubscriptionManager(object): + + _logger = logging.getLogger(__name__) + + def __init__(self): + self._subscription_map = dict() + + def add_record(self, topic, qos, message_callback, ack_callback): + self._logger.debug("Adding a new subscription record: %s qos: %d", topic, qos) + self._subscription_map[topic] = qos, message_callback, ack_callback # message_callback and/or ack_callback could be None + + def remove_record(self, topic): + self._logger.debug("Removing subscription record: %s", topic) + if self._subscription_map.get(topic): # Ignore topics that are never subscribed to + del self._subscription_map[topic] + else: + self._logger.warn("Removing attempt for non-exist subscription record: %s", topic) + + def list_records(self): + return 
list(self._subscription_map.items()) + + +class OfflineRequestsManager(object): + + _logger = logging.getLogger(__name__) + + def __init__(self, max_size, drop_behavior): + self._queue = OfflineRequestQueue(max_size, drop_behavior) + + def has_more(self): + return len(self._queue) > 0 + + def add_one(self, request): + return self._queue.append(request) + + def get_next(self): + if self.has_more(): + return self._queue.pop(0) + else: + return None diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/mqtt_core.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/mqtt_core.py new file mode 100644 index 0000000..e2f98fc --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/mqtt_core.py @@ -0,0 +1,373 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
+# */ + +import AWSIoTPythonSDK +from AWSIoTPythonSDK.core.protocol.internal.clients import InternalAsyncMqttClient +from AWSIoTPythonSDK.core.protocol.internal.clients import ClientStatusContainer +from AWSIoTPythonSDK.core.protocol.internal.clients import ClientStatus +from AWSIoTPythonSDK.core.protocol.internal.workers import EventProducer +from AWSIoTPythonSDK.core.protocol.internal.workers import EventConsumer +from AWSIoTPythonSDK.core.protocol.internal.workers import SubscriptionManager +from AWSIoTPythonSDK.core.protocol.internal.workers import OfflineRequestsManager +from AWSIoTPythonSDK.core.protocol.internal.requests import RequestTypes +from AWSIoTPythonSDK.core.protocol.internal.requests import QueueableRequest +from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC +from AWSIoTPythonSDK.core.protocol.internal.defaults import DEFAULT_OPERATION_TIMEOUT_SEC +from AWSIoTPythonSDK.core.protocol.internal.defaults import METRICS_PREFIX +from AWSIoTPythonSDK.core.protocol.internal.defaults import ALPN_PROTCOLS +from AWSIoTPythonSDK.core.protocol.internal.events import FixedEventMids +from AWSIoTPythonSDK.core.protocol.paho.client import MQTT_ERR_SUCCESS +from AWSIoTPythonSDK.exception.AWSIoTExceptions import connectError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import connectTimeoutException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import disconnectError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import disconnectTimeoutException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishTimeoutException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishQueueFullException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import publishQueueDisabledException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeQueueFullException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import 
subscribeQueueDisabledException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeQueueFullException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeQueueDisabledException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import subscribeTimeoutException +from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeError +from AWSIoTPythonSDK.exception.AWSIoTExceptions import unsubscribeTimeoutException +from AWSIoTPythonSDK.core.protocol.internal.queues import AppendResults +from AWSIoTPythonSDK.core.util.enums import DropBehaviorTypes +from AWSIoTPythonSDK.core.protocol.paho.client import MQTTv31 +from threading import Condition +from threading import Event +import logging +import sys +if sys.version_info[0] < 3: + from Queue import Queue +else: + from queue import Queue + + +class MqttCore(object): + + _logger = logging.getLogger(__name__) + + def __init__(self, client_id, clean_session, protocol, use_wss): + self._use_wss = use_wss + self._username = "" + self._password = None + self._enable_metrics_collection = True + self._event_queue = Queue() + self._event_cv = Condition() + self._event_producer = EventProducer(self._event_cv, self._event_queue) + self._client_status = ClientStatusContainer() + self._internal_async_client = InternalAsyncMqttClient(client_id, clean_session, protocol, use_wss) + self._subscription_manager = SubscriptionManager() + self._offline_requests_manager = OfflineRequestsManager(-1, DropBehaviorTypes.DROP_NEWEST) # Infinite queue + self._event_consumer = EventConsumer(self._event_cv, + self._event_queue, + self._internal_async_client, + self._subscription_manager, + self._offline_requests_manager, + self._client_status) + self._connect_disconnect_timeout_sec = DEFAULT_CONNECT_DISCONNECT_TIMEOUT_SEC + self._operation_timeout_sec = DEFAULT_OPERATION_TIMEOUT_SEC + self._init_offline_request_exceptions() + self._init_workers() + 
self._logger.info("MqttCore initialized") + self._logger.info("Client id: %s" % client_id) + self._logger.info("Protocol version: %s" % ("MQTTv3.1" if protocol == MQTTv31 else "MQTTv3.1.1")) + self._logger.info("Authentication type: %s" % ("SigV4 WebSocket" if use_wss else "TLSv1.2 certificate based Mutual Auth.")) + + def _init_offline_request_exceptions(self): + self._offline_request_queue_disabled_exceptions = { + RequestTypes.PUBLISH : publishQueueDisabledException(), + RequestTypes.SUBSCRIBE : subscribeQueueDisabledException(), + RequestTypes.UNSUBSCRIBE : unsubscribeQueueDisabledException() + } + self._offline_request_queue_full_exceptions = { + RequestTypes.PUBLISH : publishQueueFullException(), + RequestTypes.SUBSCRIBE : subscribeQueueFullException(), + RequestTypes.UNSUBSCRIBE : unsubscribeQueueFullException() + } + + def _init_workers(self): + self._internal_async_client.register_internal_event_callbacks(self._event_producer.on_connect, + self._event_producer.on_disconnect, + self._event_producer.on_publish, + self._event_producer.on_subscribe, + self._event_producer.on_unsubscribe, + self._event_producer.on_message) + + def _start_workers(self): + self._event_consumer.start() + + def use_wss(self): + return self._use_wss + + # Used for general message event reception + def on_message(self, message): + pass + + # Used for general online event notification + def on_online(self): + pass + + # Used for general offline event notification + def on_offline(self): + pass + + def configure_cert_credentials(self, cert_credentials_provider): + self._logger.info("Configuring certificates...") + self._internal_async_client.set_cert_credentials_provider(cert_credentials_provider) + + def configure_iam_credentials(self, iam_credentials_provider): + self._logger.info("Configuring custom IAM credentials...") + self._internal_async_client.set_iam_credentials_provider(iam_credentials_provider) + + def configure_endpoint(self, endpoint_provider): + 
self._logger.info("Configuring endpoint...") + self._internal_async_client.set_endpoint_provider(endpoint_provider) + + def configure_connect_disconnect_timeout_sec(self, connect_disconnect_timeout_sec): + self._logger.info("Configuring connect/disconnect time out: %f sec" % connect_disconnect_timeout_sec) + self._connect_disconnect_timeout_sec = connect_disconnect_timeout_sec + + def configure_operation_timeout_sec(self, operation_timeout_sec): + self._logger.info("Configuring MQTT operation time out: %f sec" % operation_timeout_sec) + self._operation_timeout_sec = operation_timeout_sec + + def configure_reconnect_back_off(self, base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec): + self._logger.info("Configuring reconnect back off timing...") + self._logger.info("Base quiet time: %f sec" % base_reconnect_quiet_sec) + self._logger.info("Max quiet time: %f sec" % max_reconnect_quiet_sec) + self._logger.info("Stable connection time: %f sec" % stable_connection_sec) + self._internal_async_client.configure_reconnect_back_off(base_reconnect_quiet_sec, max_reconnect_quiet_sec, stable_connection_sec) + + def configure_alpn_protocols(self): + self._logger.info("Configuring alpn protocols...") + self._internal_async_client.configure_alpn_protocols([ALPN_PROTCOLS]) + + def configure_last_will(self, topic, payload, qos, retain=False): + self._logger.info("Configuring last will...") + self._internal_async_client.configure_last_will(topic, payload, qos, retain) + + def clear_last_will(self): + self._logger.info("Clearing last will...") + self._internal_async_client.clear_last_will() + + def configure_username_password(self, username, password=None): + self._logger.info("Configuring username and password...") + self._username = username + self._password = password + + def configure_socket_factory(self, socket_factory): + self._logger.info("Configuring socket factory...") + self._internal_async_client.set_socket_factory(socket_factory) + + def 
enable_metrics_collection(self): + self._enable_metrics_collection = True + + def disable_metrics_collection(self): + self._enable_metrics_collection = False + + def configure_offline_requests_queue(self, max_size, drop_behavior): + self._logger.info("Configuring offline requests queueing: max queue size: %d", max_size) + self._offline_requests_manager = OfflineRequestsManager(max_size, drop_behavior) + self._event_consumer.update_offline_requests_manager(self._offline_requests_manager) + + def configure_draining_interval_sec(self, draining_interval_sec): + self._logger.info("Configuring offline requests queue draining interval: %f sec", draining_interval_sec) + self._event_consumer.update_draining_interval_sec(draining_interval_sec) + + def connect(self, keep_alive_sec): + self._logger.info("Performing sync connect...") + event = Event() + self.connect_async(keep_alive_sec, self._create_blocking_ack_callback(event)) + if not event.wait(self._connect_disconnect_timeout_sec): + self._logger.error("Connect timed out") + raise connectTimeoutException() + return True + + def connect_async(self, keep_alive_sec, ack_callback=None): + self._logger.info("Performing async connect...") + self._logger.info("Keep-alive: %f sec" % keep_alive_sec) + self._start_workers() + self._load_callbacks() + self._load_username_password() + + try: + self._client_status.set_status(ClientStatus.CONNECT) + rc = self._internal_async_client.connect(keep_alive_sec, ack_callback) + if MQTT_ERR_SUCCESS != rc: + self._logger.error("Connect error: %d", rc) + raise connectError(rc) + except Exception as e: + # Provided any error in connect, we should clean up the threads that have been created + self._event_consumer.stop() + if not self._event_consumer.wait_until_it_stops(self._connect_disconnect_timeout_sec): + self._logger.error("Time out in waiting for event consumer to stop") + else: + self._logger.debug("Event consumer stopped") + self._client_status.set_status(ClientStatus.IDLE) + raise e + + 
return FixedEventMids.CONNACK_MID + + def _load_callbacks(self): + self._logger.debug("Passing in general notification callbacks to internal client...") + self._internal_async_client.on_online = self.on_online + self._internal_async_client.on_offline = self.on_offline + self._internal_async_client.on_message = self.on_message + + def _load_username_password(self): + username_candidate = self._username + if self._enable_metrics_collection: + username_candidate += METRICS_PREFIX + username_candidate += AWSIoTPythonSDK.__version__ + self._internal_async_client.set_username_password(username_candidate, self._password) + + def disconnect(self): + self._logger.info("Performing sync disconnect...") + event = Event() + self.disconnect_async(self._create_blocking_ack_callback(event)) + if not event.wait(self._connect_disconnect_timeout_sec): + self._logger.error("Disconnect timed out") + raise disconnectTimeoutException() + if not self._event_consumer.wait_until_it_stops(self._connect_disconnect_timeout_sec): + self._logger.error("Disconnect timed out in waiting for event consumer") + raise disconnectTimeoutException() + return True + + def disconnect_async(self, ack_callback=None): + self._logger.info("Performing async disconnect...") + self._client_status.set_status(ClientStatus.USER_DISCONNECT) + rc = self._internal_async_client.disconnect(ack_callback) + if MQTT_ERR_SUCCESS != rc: + self._logger.error("Disconnect error: %d", rc) + raise disconnectError(rc) + return FixedEventMids.DISCONNECT_MID + + def publish(self, topic, payload, qos, retain=False): + self._logger.info("Performing sync publish...") + ret = False + if ClientStatus.STABLE != self._client_status.get_status(): + self._handle_offline_request(RequestTypes.PUBLISH, (topic, payload, qos, retain)) + else: + if qos > 0: + event = Event() + rc, mid = self._publish_async(topic, payload, qos, retain, self._create_blocking_ack_callback(event)) + if not event.wait(self._operation_timeout_sec): + 
self._internal_async_client.remove_event_callback(mid) + self._logger.error("Publish timed out") + raise publishTimeoutException() + else: + self._publish_async(topic, payload, qos, retain) + ret = True + return ret + + def publish_async(self, topic, payload, qos, retain=False, ack_callback=None): + self._logger.info("Performing async publish...") + if ClientStatus.STABLE != self._client_status.get_status(): + self._handle_offline_request(RequestTypes.PUBLISH, (topic, payload, qos, retain)) + return FixedEventMids.QUEUED_MID + else: + rc, mid = self._publish_async(topic, payload, qos, retain, ack_callback) + return mid + + def _publish_async(self, topic, payload, qos, retain=False, ack_callback=None): + rc, mid = self._internal_async_client.publish(topic, payload, qos, retain, ack_callback) + if MQTT_ERR_SUCCESS != rc: + self._logger.error("Publish error: %d", rc) + raise publishError(rc) + return rc, mid + + def subscribe(self, topic, qos, message_callback=None): + self._logger.info("Performing sync subscribe...") + ret = False + if ClientStatus.STABLE != self._client_status.get_status(): + self._handle_offline_request(RequestTypes.SUBSCRIBE, (topic, qos, message_callback, None)) + else: + event = Event() + rc, mid = self._subscribe_async(topic, qos, self._create_blocking_ack_callback(event), message_callback) + if not event.wait(self._operation_timeout_sec): + self._internal_async_client.remove_event_callback(mid) + self._logger.error("Subscribe timed out") + raise subscribeTimeoutException() + ret = True + return ret + + def subscribe_async(self, topic, qos, ack_callback=None, message_callback=None): + self._logger.info("Performing async subscribe...") + if ClientStatus.STABLE != self._client_status.get_status(): + self._handle_offline_request(RequestTypes.SUBSCRIBE, (topic, qos, message_callback, ack_callback)) + return FixedEventMids.QUEUED_MID + else: + rc, mid = self._subscribe_async(topic, qos, ack_callback, message_callback) + return mid + + def 
_subscribe_async(self, topic, qos, ack_callback=None, message_callback=None): + self._subscription_manager.add_record(topic, qos, message_callback, ack_callback) + rc, mid = self._internal_async_client.subscribe(topic, qos, ack_callback) + if MQTT_ERR_SUCCESS != rc: + self._logger.error("Subscribe error: %d", rc) + raise subscribeError(rc) + return rc, mid + + def unsubscribe(self, topic): + self._logger.info("Performing sync unsubscribe...") + ret = False + if ClientStatus.STABLE != self._client_status.get_status(): + self._handle_offline_request(RequestTypes.UNSUBSCRIBE, (topic, None)) + else: + event = Event() + rc, mid = self._unsubscribe_async(topic, self._create_blocking_ack_callback(event)) + if not event.wait(self._operation_timeout_sec): + self._internal_async_client.remove_event_callback(mid) + self._logger.error("Unsubscribe timed out") + raise unsubscribeTimeoutException() + ret = True + return ret + + def unsubscribe_async(self, topic, ack_callback=None): + self._logger.info("Performing async unsubscribe...") + if ClientStatus.STABLE != self._client_status.get_status(): + self._handle_offline_request(RequestTypes.UNSUBSCRIBE, (topic, ack_callback)) + return FixedEventMids.QUEUED_MID + else: + rc, mid = self._unsubscribe_async(topic, ack_callback) + return mid + + def _unsubscribe_async(self, topic, ack_callback=None): + self._subscription_manager.remove_record(topic) + rc, mid = self._internal_async_client.unsubscribe(topic, ack_callback) + if MQTT_ERR_SUCCESS != rc: + self._logger.error("Unsubscribe error: %d", rc) + raise unsubscribeError(rc) + return rc, mid + + def _create_blocking_ack_callback(self, event): + def ack_callback(mid, data=None): + event.set() + return ack_callback + + def _handle_offline_request(self, type, data): + self._logger.info("Offline request detected!") + offline_request = QueueableRequest(type, data) + append_result = self._offline_requests_manager.add_one(offline_request) + if AppendResults.APPEND_FAILURE_QUEUE_DISABLED == 
append_result: + self._logger.error("Offline request queue has been disabled") + raise self._offline_request_queue_disabled_exceptions[type] + if AppendResults.APPEND_FAILURE_QUEUE_FULL == append_result: + self._logger.error("Offline request queue is full") + raise self._offline_request_queue_full_exceptions[type] diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/paho/__init__.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/paho/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/paho/client.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/paho/client.py new file mode 100644 index 0000000..503d1c6 --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/protocol/paho/client.py @@ -0,0 +1,2445 @@ +# Copyright (c) 2012-2014 Roger Light +# +# All rights reserved. This program and the accompanying materials +# are made available under the terms of the Eclipse Public License v1.0 +# and Eclipse Distribution License v1.0 which accompany this distribution. +# +# The Eclipse Public License is available at +# http://www.eclipse.org/legal/epl-v10.html +# and the Eclipse Distribution License is available at +# http://www.eclipse.org/org/documents/edl-v10.php. +# +# Contributors: +# Roger Light - initial API and implementation + +""" +This is an MQTT v3.1 client module. MQTT is a lightweight pub/sub messaging +protocol that is easy to implement and suitable for low powered devices. 
+""" +import errno +import platform +import random +import select +import socket +HAVE_SSL = True +try: + import ssl + cert_reqs = ssl.CERT_REQUIRED + tls_version = ssl.PROTOCOL_TLSv1 +except: + HAVE_SSL = False + cert_reqs = None + tls_version = None +import struct +import sys +import threading +import time +HAVE_DNS = True +try: + import dns.resolver +except ImportError: + HAVE_DNS = False + +if platform.system() == 'Windows': + EAGAIN = errno.WSAEWOULDBLOCK +else: + EAGAIN = errno.EAGAIN + +from AWSIoTPythonSDK.core.protocol.connection.cores import ProgressiveBackOffCore +from AWSIoTPythonSDK.core.protocol.connection.cores import SecuredWebSocketCore +from AWSIoTPythonSDK.core.protocol.connection.alpn import SSLContextBuilder + +VERSION_MAJOR=1 +VERSION_MINOR=0 +VERSION_REVISION=0 +VERSION_NUMBER=(VERSION_MAJOR*1000000+VERSION_MINOR*1000+VERSION_REVISION) + +MQTTv31 = 3 +MQTTv311 = 4 + +if sys.version_info[0] < 3: + PROTOCOL_NAMEv31 = "MQIsdp" + PROTOCOL_NAMEv311 = "MQTT" +else: + PROTOCOL_NAMEv31 = b"MQIsdp" + PROTOCOL_NAMEv311 = b"MQTT" + +PROTOCOL_VERSION = 3 + +# Message types +CONNECT = 0x10 +CONNACK = 0x20 +PUBLISH = 0x30 +PUBACK = 0x40 +PUBREC = 0x50 +PUBREL = 0x60 +PUBCOMP = 0x70 +SUBSCRIBE = 0x80 +SUBACK = 0x90 +UNSUBSCRIBE = 0xA0 +UNSUBACK = 0xB0 +PINGREQ = 0xC0 +PINGRESP = 0xD0 +DISCONNECT = 0xE0 + +# Log levels +MQTT_LOG_INFO = 0x01 +MQTT_LOG_NOTICE = 0x02 +MQTT_LOG_WARNING = 0x04 +MQTT_LOG_ERR = 0x08 +MQTT_LOG_DEBUG = 0x10 + +# CONNACK codes +CONNACK_ACCEPTED = 0 +CONNACK_REFUSED_PROTOCOL_VERSION = 1 +CONNACK_REFUSED_IDENTIFIER_REJECTED = 2 +CONNACK_REFUSED_SERVER_UNAVAILABLE = 3 +CONNACK_REFUSED_BAD_USERNAME_PASSWORD = 4 +CONNACK_REFUSED_NOT_AUTHORIZED = 5 + +# Connection state +mqtt_cs_new = 0 +mqtt_cs_connected = 1 +mqtt_cs_disconnecting = 2 +mqtt_cs_connect_async = 3 + +# Message state +mqtt_ms_invalid = 0 +mqtt_ms_publish= 1 +mqtt_ms_wait_for_puback = 2 +mqtt_ms_wait_for_pubrec = 3 +mqtt_ms_resend_pubrel = 4 +mqtt_ms_wait_for_pubrel = 5 
+mqtt_ms_resend_pubcomp = 6 +mqtt_ms_wait_for_pubcomp = 7 +mqtt_ms_send_pubrec = 8 +mqtt_ms_queued = 9 + +# Error values +MQTT_ERR_AGAIN = -1 +MQTT_ERR_SUCCESS = 0 +MQTT_ERR_NOMEM = 1 +MQTT_ERR_PROTOCOL = 2 +MQTT_ERR_INVAL = 3 +MQTT_ERR_NO_CONN = 4 +MQTT_ERR_CONN_REFUSED = 5 +MQTT_ERR_NOT_FOUND = 6 +MQTT_ERR_CONN_LOST = 7 +MQTT_ERR_TLS = 8 +MQTT_ERR_PAYLOAD_SIZE = 9 +MQTT_ERR_NOT_SUPPORTED = 10 +MQTT_ERR_AUTH = 11 +MQTT_ERR_ACL_DENIED = 12 +MQTT_ERR_UNKNOWN = 13 +MQTT_ERR_ERRNO = 14 + +# MessageQueueing DropBehavior +MSG_QUEUEING_DROP_OLDEST = 0 +MSG_QUEUEING_DROP_NEWEST = 1 + +if sys.version_info[0] < 3: + sockpair_data = "0" +else: + sockpair_data = b"0" + +def error_string(mqtt_errno): + """Return the error string associated with an mqtt error number.""" + if mqtt_errno == MQTT_ERR_SUCCESS: + return "No error." + elif mqtt_errno == MQTT_ERR_NOMEM: + return "Out of memory." + elif mqtt_errno == MQTT_ERR_PROTOCOL: + return "A network protocol error occurred when communicating with the broker." + elif mqtt_errno == MQTT_ERR_INVAL: + return "Invalid function arguments provided." + elif mqtt_errno == MQTT_ERR_NO_CONN: + return "The client is not currently connected." + elif mqtt_errno == MQTT_ERR_CONN_REFUSED: + return "The connection was refused." + elif mqtt_errno == MQTT_ERR_NOT_FOUND: + return "Message not found (internal error)." + elif mqtt_errno == MQTT_ERR_CONN_LOST: + return "The connection was lost." + elif mqtt_errno == MQTT_ERR_TLS: + return "A TLS error occurred." + elif mqtt_errno == MQTT_ERR_PAYLOAD_SIZE: + return "Payload too large." + elif mqtt_errno == MQTT_ERR_NOT_SUPPORTED: + return "This feature is not supported." + elif mqtt_errno == MQTT_ERR_AUTH: + return "Authorisation failed." + elif mqtt_errno == MQTT_ERR_ACL_DENIED: + return "Access denied by ACL." + elif mqtt_errno == MQTT_ERR_UNKNOWN: + return "Unknown error." + elif mqtt_errno == MQTT_ERR_ERRNO: + return "Error defined by errno." + else: + return "Unknown error." 
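
Editorial aside, not part of the diff: the long `if`/`elif` chain in `error_string` above is behavior-preserving when expressed as a dictionary lookup with a default, which is the more idiomatic Python form. A minimal self-contained sketch (constants abridged from the `MQTT_ERR_*` values defined earlier in this module; the mapping shown is a subset, not the full table):

```python
# Sketch: error_string as a dict lookup instead of an if/elif chain.
# These constants mirror a subset of the MQTT_ERR_* values defined above.
MQTT_ERR_SUCCESS = 0
MQTT_ERR_NOMEM = 1
MQTT_ERR_NO_CONN = 4

_ERROR_STRINGS = {
    MQTT_ERR_SUCCESS: "No error.",
    MQTT_ERR_NOMEM: "Out of memory.",
    MQTT_ERR_NO_CONN: "The client is not currently connected.",
}

def error_string(mqtt_errno):
    """Return the error string associated with an mqtt error number."""
    # Unrecognized codes fall through to the same default as the elif chain.
    return _ERROR_STRINGS.get(mqtt_errno, "Unknown error.")

print(error_string(MQTT_ERR_NO_CONN))  # The client is not currently connected.
```

The dict form keeps each code/message pair on one line and makes the fallback explicit in a single `get` call; the original chain is kept in the vendored file to stay byte-identical with upstream paho.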
+ + +def connack_string(connack_code): + """Return the string associated with a CONNACK result.""" + if connack_code == 0: + return "Connection Accepted." + elif connack_code == 1: + return "Connection Refused: unacceptable protocol version." + elif connack_code == 2: + return "Connection Refused: identifier rejected." + elif connack_code == 3: + return "Connection Refused: broker unavailable." + elif connack_code == 4: + return "Connection Refused: bad user name or password." + elif connack_code == 5: + return "Connection Refused: not authorised." + else: + return "Connection Refused: unknown reason." + + +def topic_matches_sub(sub, topic): + """Check whether a topic matches a subscription. + + For example: + + foo/bar would match the subscription foo/# or +/bar + non/matching would not match the subscription non/+/+ + """ + result = True + multilevel_wildcard = False + + slen = len(sub) + tlen = len(topic) + + if slen > 0 and tlen > 0: + if (sub[0] == '$' and topic[0] != '$') or (topic[0] == '$' and sub[0] != '$'): + return False + + spos = 0 + tpos = 0 + + while spos < slen and tpos < tlen: + if sub[spos] == topic[tpos]: + if tpos == tlen-1: + # Check for e.g. 
foo matching foo/# + if spos == slen-3 and sub[spos+1] == '/' and sub[spos+2] == '#': + result = True + multilevel_wildcard = True + break + + spos += 1 + tpos += 1 + + if tpos == tlen and spos == slen-1 and sub[spos] == '+': + spos += 1 + result = True + break + else: + if sub[spos] == '+': + spos += 1 + while tpos < tlen and topic[tpos] != '/': + tpos += 1 + if tpos == tlen and spos == slen: + result = True + break + + elif sub[spos] == '#': + multilevel_wildcard = True + if spos+1 != slen: + result = False + break + else: + result = True + break + + else: + result = False + break + + if not multilevel_wildcard and (tpos < tlen or spos < slen): + result = False + + return result + + +def _socketpair_compat(): + """TCP/IP socketpair including Windows support""" + listensock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_IP) + listensock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + listensock.bind(("127.0.0.1", 0)) + listensock.listen(1) + + iface, port = listensock.getsockname() + sock1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_IP) + sock1.setblocking(0) + try: + sock1.connect(("127.0.0.1", port)) + except socket.error as err: + if err.errno != errno.EINPROGRESS and err.errno != errno.EWOULDBLOCK and err.errno != EAGAIN: + raise + sock2, address = listensock.accept() + sock2.setblocking(0) + listensock.close() + return (sock1, sock2) + + +class MQTTMessage: + """ This is a class that describes an incoming message. It is passed to the + on_message callback as the message parameter. + + Members: + + topic : String. topic that the message was published on. + payload : String/bytes the message payload. + qos : Integer. The message Quality of Service 0, 1 or 2. + retain : Boolean. If true, the message is a retained message and not fresh. + mid : Integer. The message id. 
+    """
+    def __init__(self):
+        self.timestamp = 0
+        self.state = mqtt_ms_invalid
+        self.dup = False
+        self.mid = 0
+        self.topic = ""
+        self.payload = None
+        self.qos = 0
+        self.retain = False
+
+
+class Client(object):
+    """MQTT version 3.1/3.1.1 client class.
+
+    This is the main class for use communicating with an MQTT broker.
+
+    General usage flow:
+
+    * Use connect()/connect_async() to connect to a broker
+    * Call loop() frequently to maintain network traffic flow with the broker
+    * Or use loop_start() to set a thread running to call loop() for you.
+    * Or use loop_forever() to handle calling loop() for you in a blocking
+      function.
+    * Use subscribe() to subscribe to a topic and receive messages
+    * Use publish() to send messages
+    * Use disconnect() to disconnect from the broker
+
+    Data returned from the broker is made available with the use of callback
+    functions as described below.
+
+    Callbacks
+    =========
+
+    A number of callback functions are available to receive data back from the
+    broker. To use a callback, define a function and then assign it to the
+    client:
+
+    def on_connect(client, userdata, flags, rc):
+        print("Connection returned " + str(rc))
+
+    client.on_connect = on_connect
+
+    All of the callbacks as described below have a "client" and a "userdata"
+    argument. "client" is the Client instance that is calling the callback.
+    "userdata" is user data of any type and can be set when creating a new client
+    instance or with user_data_set(userdata).
+
+    The callbacks:
+
+    on_connect(client, userdata, flags, rc): called when the broker responds to our connection
+      request.
+      flags is a dict that contains response flags from the broker:
+        flags['session present'] - this flag is useful for clients that are
+          using clean session set to 0 only. If a client with clean session=0
+          reconnects to a broker that it has previously
+          connected to, this flag indicates whether the broker still has the
+          session information for the client.
If 1, the session still exists. + The value of rc determines success or not: + 0: Connection successful + 1: Connection refused - incorrect protocol version + 2: Connection refused - invalid client identifier + 3: Connection refused - server unavailable + 4: Connection refused - bad username or password + 5: Connection refused - not authorised + 6-255: Currently unused. + + on_disconnect(client, userdata, rc): called when the client disconnects from the broker. + The rc parameter indicates the disconnection state. If MQTT_ERR_SUCCESS + (0), the callback was called in response to a disconnect() call. If any + other value the disconnection was unexpected, such as might be caused by + a network error. + + on_message(client, userdata, message): called when a message has been received on a + topic that the client subscribes to. The message variable is a + MQTTMessage that describes all of the message parameters. + + on_publish(client, userdata, mid): called when a message that was to be sent using the + publish() call has completed transmission to the broker. For messages + with QoS levels 1 and 2, this means that the appropriate handshakes have + completed. For QoS 0, this simply means that the message has left the + client. The mid variable matches the mid variable returned from the + corresponding publish() call, to allow outgoing messages to be tracked. + This callback is important because even if the publish() call returns + success, it does not always mean that the message has been sent. + + on_subscribe(client, userdata, mid, granted_qos): called when the broker responds to a + subscribe request. The mid variable matches the mid variable returned + from the corresponding subscribe() call. The granted_qos variable is a + list of integers that give the QoS level the broker has granted for each + of the different subscription requests. + + on_unsubscribe(client, userdata, mid): called when the broker responds to an unsubscribe + request. 
The mid variable matches the mid variable returned from the + corresponding unsubscribe() call. + + on_log(client, userdata, level, buf): called when the client has log information. Define + to allow debugging. The level variable gives the severity of the message + and will be one of MQTT_LOG_INFO, MQTT_LOG_NOTICE, MQTT_LOG_WARNING, + MQTT_LOG_ERR, and MQTT_LOG_DEBUG. The message itself is in buf. + + """ + def __init__(self, client_id="", clean_session=True, userdata=None, protocol=MQTTv31, useSecuredWebsocket=False): + """client_id is the unique client id string used when connecting to the + broker. If client_id is zero length or None, then one will be randomly + generated. In this case, clean_session must be True. If this is not the + case a ValueError will be raised. + + clean_session is a boolean that determines the client type. If True, + the broker will remove all information about this client when it + disconnects. If False, the client is a persistent client and + subscription information and queued messages will be retained when the + client disconnects. + Note that a client will never discard its own outgoing messages on + disconnect. Calling connect() or reconnect() will cause the messages to + be resent. Use reinitialise() to reset a client to its original state. + + userdata is user defined data of any type that is passed as the "userdata" + parameter to callbacks. It may be updated at a later point with the + user_data_set() function. + + The protocol argument allows explicit setting of the MQTT version to + use for this client. Can be paho.mqtt.client.MQTTv311 (v3.1.1) or + paho.mqtt.client.MQTTv31 (v3.1), with the default being v3.1. If the + broker reports that the client connected with an invalid protocol + version, the client will automatically attempt to reconnect using v3.1 + instead. + + useSecuredWebsocket is a boolean that determines whether the client uses + MQTT over Websocket with sigV4 signing (True) or MQTT with plain TCP + socket. 
If True, the client will try to find AWS_ACCESS_KEY_ID and
+        AWS_SECRET_ACCESS_KEY in the system environment variables and start the
+        sigV4 signing and Websocket handshake. Under this configuration, all
+        outbound MQTT packets will be wrapped in Websocket frames. All
+        inbound MQTT packets will be automatically wss-decoded.
+        """
+        if not clean_session and (client_id == "" or client_id is None):
+            raise ValueError('A client id must be provided if clean session is False.')
+
+        self._protocol = protocol
+        self._userdata = userdata
+        self._sock = None
+        self._sockpairR, self._sockpairW = _socketpair_compat()
+        self._keepalive = 60
+        self._message_retry = 20
+        self._last_retry_check = 0
+        self._clean_session = clean_session
+        if client_id == "" or client_id is None:
+            self._client_id = "paho/" + "".join(random.choice("0123456789ABCDEF") for x in range(23-5))
+        else:
+            self._client_id = client_id
+
+        self._username = ""
+        self._password = ""
+        self._in_packet = {
+            "command": 0,
+            "have_remaining": 0,
+            "remaining_count": [],
+            "remaining_mult": 1,
+            "remaining_length": 0,
+            "packet": b"",
+            "to_process": 0,
+            "pos": 0}
+        self._out_packet = []
+        self._current_out_packet = None
+        self._last_msg_in = time.time()
+        self._last_msg_out = time.time()
+        self._ping_t = 0
+        self._last_mid = 0
+        self._state = mqtt_cs_new
+        self._max_inflight_messages = 20
+        self._out_messages = []
+        self._in_messages = []
+        self._inflight_messages = 0
+        self._will = False
+        self._will_topic = ""
+        self._will_payload = None
+        self._will_qos = 0
+        self._will_retain = False
+        self.on_disconnect = None
+        self.on_connect = None
+        self.on_publish = None
+        self.on_message = None
+        self.on_message_filtered = []
+        self.on_subscribe = None
+        self.on_unsubscribe = None
+        self.on_log = None
+        self._host = ""
+        self._port = 1883
+        self._bind_address = ""
+        self._socket_factory = None
+        self._in_callback = False
+        self._strict_protocol = False
+        self._callback_mutex = threading.Lock()
+
self._state_mutex = threading.Lock() + self._out_packet_mutex = threading.Lock() + self._current_out_packet_mutex = threading.Lock() + self._msgtime_mutex = threading.Lock() + self._out_message_mutex = threading.Lock() + self._in_message_mutex = threading.Lock() + self._thread = None + self._thread_terminate = False + self._ssl = None + self._tls_certfile = None + self._tls_keyfile = None + self._tls_ca_certs = None + self._tls_cert_reqs = None + self._tls_ciphers = None + self._tls_version = tls_version + self._tls_insecure = False + self._useSecuredWebsocket = useSecuredWebsocket # Do we enable secured websocket + self._backoffCore = ProgressiveBackOffCore() # Init the backoffCore using default configuration + self._AWSAccessKeyIDCustomConfig = "" + self._AWSSecretAccessKeyCustomConfig = "" + self._AWSSessionTokenCustomConfig = "" + self._alpn_protocols = None + + def __del__(self): + pass + + + def setBackoffTiming(self, srcBaseReconnectTimeSecond, srcMaximumReconnectTimeSecond, srcMinimumConnectTimeSecond): + """ + Make custom settings for backoff timing for reconnect logic + srcBaseReconnectTimeSecond - The base reconnection time in seconds + srcMaximumReconnectTimeSecond - The maximum reconnection time in seconds + srcMinimumConnectTimeSecond - The minimum time in seconds that a connection must be maintained in order to be considered stable + * Raise ValueError if input params are malformed + """ + self._backoffCore.configTime(srcBaseReconnectTimeSecond, srcMaximumReconnectTimeSecond, srcMinimumConnectTimeSecond) + + def configIAMCredentials(self, srcAWSAccessKeyID, srcAWSSecretAccessKey, srcAWSSessionToken): + """ + Make custom settings for IAM credentials for websocket connection + srcAWSAccessKeyID - AWS IAM access key + srcAWSSecretAccessKey - AWS IAM secret key + srcAWSSessionToken - AWS Session Token + """ + self._AWSAccessKeyIDCustomConfig = srcAWSAccessKeyID + self._AWSSecretAccessKeyCustomConfig = srcAWSSecretAccessKey + 
self._AWSSessionTokenCustomConfig = srcAWSSessionToken
+
+    def config_alpn_protocols(self, alpn_protocols):
+        """
+        Make custom settings for ALPN protocols
+        :param alpn_protocols: Array of strings that specifies the alpn protocols to be used
+        :return: None
+        """
+        self._alpn_protocols = alpn_protocols
+
+    def reinitialise(self, client_id="", clean_session=True, userdata=None):
+        if self._ssl:
+            self._ssl.close()
+            self._ssl = None
+            self._sock = None
+        elif self._sock:
+            self._sock.close()
+            self._sock = None
+        if self._sockpairR:
+            self._sockpairR.close()
+            self._sockpairR = None
+        if self._sockpairW:
+            self._sockpairW.close()
+            self._sockpairW = None
+
+        self.__init__(client_id, clean_session, userdata)
+
+    def tls_set(self, ca_certs, certfile=None, keyfile=None, cert_reqs=cert_reqs, tls_version=tls_version, ciphers=None):
+        """Configure network encryption and authentication options. Enables SSL/TLS support.
+
+        ca_certs : a string path to the Certificate Authority certificate files
+        that are to be treated as trusted by this client. If this is the only
+        option given then the client will operate in a similar manner to a web
+        browser. That is to say it will require the broker to have a
+        certificate signed by the Certificate Authorities in ca_certs and will
+        communicate using TLS v1, but will not attempt any form of
+        authentication. This provides basic network encryption but may not be
+        sufficient depending on how the broker is configured.
+
+        certfile and keyfile are strings pointing to the PEM encoded client
+        certificate and private keys respectively. If these arguments are not
+        None then they will be used as client information for TLS based
+        authentication. Support for this feature is broker dependent. Note
+        that if either of these files is encrypted and needs a password to
+        decrypt it, Python will ask for the password at the command line. It is
+        not currently possible to define a callback to provide the password.
+
+        cert_reqs allows the certificate requirements that the client imposes
+        on the broker to be changed. By default this is ssl.CERT_REQUIRED,
+        which means that the broker must provide a certificate. See the ssl
+        pydoc for more information on this parameter.
+
+        tls_version allows the version of the SSL/TLS protocol used to be
+        specified. By default TLS v1 is used. Previous versions (all versions
+        beginning with SSL) are possible but not recommended due to possible
+        security problems.
+
+        ciphers is a string specifying which encryption ciphers are allowable
+        for this connection, or None to use the defaults. See the ssl pydoc for
+        more information.
+
+        Must be called before connect() or connect_async()."""
+        if HAVE_SSL is False:
+            raise ValueError('This platform has no SSL/TLS.')
+
+        if sys.version_info < (2, 7):
+            raise ValueError('Python 2.7 is the minimum supported version for TLS.')
+
+        if ca_certs is None:
+            raise ValueError('ca_certs must not be None.')
+
+        try:
+            f = open(ca_certs, "r")
+        except IOError as err:
+            raise IOError(ca_certs+": "+err.strerror)
+        else:
+            f.close()
+        if certfile is not None:
+            try:
+                f = open(certfile, "r")
+            except IOError as err:
+                raise IOError(certfile+": "+err.strerror)
+            else:
+                f.close()
+        if keyfile is not None:
+            try:
+                f = open(keyfile, "r")
+            except IOError as err:
+                raise IOError(keyfile+": "+err.strerror)
+            else:
+                f.close()
+
+        self._tls_ca_certs = ca_certs
+        self._tls_certfile = certfile
+        self._tls_keyfile = keyfile
+        self._tls_cert_reqs = cert_reqs
+        self._tls_version = tls_version
+        self._tls_ciphers = ciphers
+
+    def tls_insecure_set(self, value):
+        """Configure verification of the server hostname in the server certificate.
+
+        If value is set to true, it is impossible to guarantee that the host
+        you are connecting to is not impersonating your server. This can be
+        useful in initial server testing, but makes it possible for a malicious
+        third party to impersonate your server through DNS spoofing, for
+        example.
+ + Do not use this function in a real system. Setting value to true means + there is no point using encryption. + + Must be called before connect().""" + if HAVE_SSL is False: + raise ValueError('This platform has no SSL/TLS.') + + self._tls_insecure = value + + def connect(self, host, port=1883, keepalive=60, bind_address=""): + """Connect to a remote broker. + + host is the hostname or IP address of the remote broker. + port is the network port of the server host to connect to. Defaults to + 1883. Note that the default port for MQTT over SSL/TLS is 8883 so if you + are using tls_set() the port may need providing. + keepalive: Maximum period in seconds between communications with the + broker. If no other messages are being exchanged, this controls the + rate at which the client will send ping messages to the broker. + """ + self.connect_async(host, port, keepalive, bind_address) + return self.reconnect() + + def connect_srv(self, domain=None, keepalive=60, bind_address=""): + """Connect to a remote broker. + + domain is the DNS domain to search for SRV records; if None, + try to determine local domain name. 
+
+        keepalive and bind_address are as for connect()
+        """
+
+        if HAVE_DNS is False:
+            raise ValueError('No DNS resolver library found.')
+
+        if domain is None:
+            domain = socket.getfqdn()
+            domain = domain[domain.find('.') + 1:]
+
+        try:
+            rr = '_mqtt._tcp.%s' % domain
+            if self._ssl is not None:
+                # IANA specifies secure-mqtt (not mqtts) for port 8883
+                rr = '_secure-mqtt._tcp.%s' % domain
+            answers = []
+            for answer in dns.resolver.query(rr, dns.rdatatype.SRV):
+                addr = answer.target.to_text()[:-1]
+                answers.append((addr, answer.port, answer.priority, answer.weight))
+        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
+            raise ValueError("No answer/NXDOMAIN for SRV in %s" % (domain))
+
+        # FIXME: doesn't account for weight
+        for answer in answers:
+            host, port, prio, weight = answer
+
+            try:
+                return self.connect(host, port, keepalive, bind_address)
+            except Exception:
+                pass
+
+        raise ValueError("No SRV hosts responded")
+
+    def connect_async(self, host, port=1883, keepalive=60, bind_address=""):
+        """Connect to a remote broker asynchronously. This is a non-blocking
+        connect call that can be used with loop_start() to provide very quick
+        start.
+
+        host is the hostname or IP address of the remote broker.
+        port is the network port of the server host to connect to. Defaults to
+        1883. Note that the default port for MQTT over SSL/TLS is 8883 so if you
+        are using tls_set() the port may need providing.
+        keepalive: Maximum period in seconds between communications with the
+        broker. If no other messages are being exchanged, this controls the
+        rate at which the client will send ping messages to the broker.
+ """ + if host is None or len(host) == 0: + raise ValueError('Invalid host.') + if port <= 0: + raise ValueError('Invalid port number.') + if keepalive < 0: + raise ValueError('Keepalive must be >=0.') + if bind_address != "" and bind_address is not None: + if (sys.version_info[0] == 2 and sys.version_info[1] < 7) or (sys.version_info[0] == 3 and sys.version_info[1] < 2): + raise ValueError('bind_address requires Python 2.7 or 3.2.') + + self._host = host + self._port = port + self._keepalive = keepalive + self._bind_address = bind_address + + self._state_mutex.acquire() + self._state = mqtt_cs_connect_async + self._state_mutex.release() + + def reconnect(self): + """Reconnect the client after a disconnect. Can only be called after + connect()/connect_async().""" + if len(self._host) == 0: + raise ValueError('Invalid host.') + if self._port <= 0: + raise ValueError('Invalid port number.') + + self._in_packet = { + "command": 0, + "have_remaining": 0, + "remaining_count": [], + "remaining_mult": 1, + "remaining_length": 0, + "packet": b"", + "to_process": 0, + "pos": 0} + + self._out_packet_mutex.acquire() + self._out_packet = [] + self._out_packet_mutex.release() + + self._current_out_packet_mutex.acquire() + self._current_out_packet = None + self._current_out_packet_mutex.release() + + self._msgtime_mutex.acquire() + self._last_msg_in = time.time() + self._last_msg_out = time.time() + self._msgtime_mutex.release() + + self._ping_t = 0 + self._state_mutex.acquire() + self._state = mqtt_cs_new + self._state_mutex.release() + if self._ssl: + self._ssl.close() + self._ssl = None + self._sock = None + elif self._sock: + self._sock.close() + self._sock = None + + # Put messages in progress in a valid state. 
+ self._messages_reconnect_reset() + + try: + if self._socket_factory: + sock = self._socket_factory() + elif (sys.version_info[0] == 2 and sys.version_info[1] < 7) or (sys.version_info[0] == 3 and sys.version_info[1] < 2): + sock = socket.create_connection((self._host, self._port)) + else: + sock = socket.create_connection((self._host, self._port), source_address=(self._bind_address, 0)) + except socket.error as err: + if err.errno != errno.EINPROGRESS and err.errno != errno.EWOULDBLOCK and err.errno != EAGAIN: + raise + + verify_hostname = self._tls_insecure is False # Decide whether we need to verify hostname + + if self._tls_ca_certs is not None: + if self._useSecuredWebsocket: + # Never assign to ._ssl before wss handshake is finished + # Non-None value for ._ssl will allow ops before wss-MQTT connection is established + rawSSL = ssl.wrap_socket(sock, ca_certs=self._tls_ca_certs, cert_reqs=ssl.CERT_REQUIRED) # Add server certificate verification + rawSSL.setblocking(0) # Non-blocking socket + self._ssl = SecuredWebSocketCore(rawSSL, self._host, self._port, self._AWSAccessKeyIDCustomConfig, self._AWSSecretAccessKeyCustomConfig, self._AWSSessionTokenCustomConfig) # Override the _ssl socket + # self._ssl.enableDebug() + elif self._alpn_protocols is not None: + # SSLContext is required to enable ALPN support + # Assuming Python 2.7.10+/3.5+ till the end of this elif branch + ssl_context = SSLContextBuilder()\ + .with_ca_certs(self._tls_ca_certs)\ + .with_cert_key_pair(self._tls_certfile, self._tls_keyfile)\ + .with_cert_reqs(self._tls_cert_reqs)\ + .with_check_hostname(True)\ + .with_ciphers(self._tls_ciphers)\ + .with_alpn_protocols(self._alpn_protocols)\ + .build() + self._ssl = ssl_context.wrap_socket(sock, server_hostname=self._host, do_handshake_on_connect=False) + verify_hostname = False # Since check_hostname in SSLContext is already set to True, no need to verify it again + self._ssl.do_handshake() + else: + self._ssl = ssl.wrap_socket( + sock, + 
certfile=self._tls_certfile, + keyfile=self._tls_keyfile, + ca_certs=self._tls_ca_certs, + cert_reqs=self._tls_cert_reqs, + ssl_version=self._tls_version, + ciphers=self._tls_ciphers) + + if verify_hostname: + if sys.version_info[0] < 3 or (sys.version_info[0] == 3 and sys.version_info[1] < 5): # No IP host match before 3.5.x + self._tls_match_hostname() + else: + ssl.match_hostname(self._ssl.getpeercert(), self._host) + + self._sock = sock + + if self._ssl and not self._useSecuredWebsocket: + self._ssl.setblocking(0) # For X.509 cert mutual auth. + elif not self._ssl: + self._sock.setblocking(0) # For plain socket + else: + pass # For MQTT over WebSocket + + return self._send_connect(self._keepalive, self._clean_session) + + def loop(self, timeout=1.0, max_packets=1): + """Process network events. + + This function must be called regularly to ensure communication with the + broker is carried out. It calls select() on the network socket to wait + for network events. If incoming data is present it will then be + processed. Outgoing commands, from e.g. publish(), are normally sent + immediately that their function is called, but this is not always + possible. loop() will also attempt to send any remaining outgoing + messages, which also includes commands that are part of the flow for + messages with QoS>0. + + timeout: The time in seconds to wait for incoming/outgoing network + traffic before timing out and returning. + max_packets: Not currently used. + + Returns MQTT_ERR_SUCCESS on success. + Returns >0 on error. 
+
+        A ValueError will be raised if timeout < 0"""
+        if timeout < 0.0:
+            raise ValueError('Invalid timeout.')
+
+        self._current_out_packet_mutex.acquire()
+        self._out_packet_mutex.acquire()
+        if self._current_out_packet is None and len(self._out_packet) > 0:
+            self._current_out_packet = self._out_packet.pop(0)
+
+        if self._current_out_packet:
+            wlist = [self.socket()]
+        else:
+            wlist = []
+        self._out_packet_mutex.release()
+        self._current_out_packet_mutex.release()
+
+        # sockpairR is used to break out of select() before the timeout, on a
+        # call to publish() etc.
+        rlist = [self.socket(), self._sockpairR]
+        try:
+            socklist = select.select(rlist, wlist, [], timeout)
+        except TypeError:
+            # Socket isn't the correct type; in all likelihood the connection is lost
+            return MQTT_ERR_CONN_LOST
+        except ValueError:
+            # Can occur if we just reconnected but rlist/wlist contain a -1 for
+            # some reason.
+            return MQTT_ERR_CONN_LOST
+        except Exception:
+            return MQTT_ERR_UNKNOWN
+
+        if self.socket() in socklist[0]:
+            rc = self.loop_read(max_packets)
+            if rc or (self._ssl is None and self._sock is None):
+                return rc
+
+        if self._sockpairR in socklist[0]:
+            # Stimulate output write even though we didn't ask for it, because
+            # at that point the publish or other command wasn't present.
+            socklist[1].insert(0, self.socket())
+            # Clear sockpairR - only ever a single byte written.
+            try:
+                self._sockpairR.recv(1)
+            except socket.error as err:
+                if err.errno != EAGAIN:
+                    raise
+
+        if self.socket() in socklist[1]:
+            rc = self.loop_write(max_packets)
+            if rc or (self._ssl is None and self._sock is None):
+                return rc
+
+        return self.loop_misc()
+
+    def publish(self, topic, payload=None, qos=0, retain=False):
+        """Publish a message on a topic.
+
+        This causes a message to be sent to the broker and subsequently from
+        the broker to any clients subscribing to matching topics.
+
+        topic: The topic that the message should be published on.
+        payload: The actual message to send.
If not given, or set to None a + zero length message will be used. Passing an int or float will result + in the payload being converted to a string representing that number. If + you wish to send a true int/float, use struct.pack() to create the + payload you require. + qos: The quality of service level to use. + retain: If set to true, the message will be set as the "last known + good"/retained message for the topic. + + Returns a tuple (result, mid), where result is MQTT_ERR_SUCCESS to + indicate success or MQTT_ERR_NO_CONN if the client is not currently + connected. mid is the message ID for the publish request. The mid + value can be used to track the publish request by checking against the + mid argument in the on_publish() callback if it is defined. + + A ValueError will be raised if topic is None, has zero length or is + invalid (contains a wildcard), if qos is not one of 0, 1 or 2, or if + the length of the payload is greater than 268435455 bytes.""" + if topic is None or len(topic) == 0: + raise ValueError('Invalid topic.') + if qos<0 or qos>2: + raise ValueError('Invalid QoS level.') + if isinstance(payload, str) or isinstance(payload, bytearray): + local_payload = payload + elif sys.version_info[0] < 3 and isinstance(payload, unicode): + local_payload = payload + elif isinstance(payload, int) or isinstance(payload, float): + local_payload = str(payload) + elif payload is None: + local_payload = None + else: + raise TypeError('payload must be a string, bytearray, int, float or None.') + + if local_payload is not None and len(local_payload) > 268435455: + raise ValueError('Payload too large.') + + if self._topic_wildcard_len_check(topic) != MQTT_ERR_SUCCESS: + raise ValueError('Publish topic cannot contain wildcards.') + + local_mid = self._mid_generate() + + if qos == 0: + rc = self._send_publish(local_mid, topic, local_payload, qos, retain, False) + return (rc, local_mid) + else: + message = MQTTMessage() + message.timestamp = time.time() + + message.mid 
= local_mid
+            message.topic = topic
+            if local_payload is None or len(local_payload) == 0:
+                message.payload = None
+            else:
+                message.payload = local_payload
+
+            message.qos = qos
+            message.retain = retain
+            message.dup = False
+
+            self._out_message_mutex.acquire()
+            self._out_messages.append(message)
+            if self._max_inflight_messages == 0 or self._inflight_messages < self._max_inflight_messages:
+                self._inflight_messages = self._inflight_messages+1
+                if qos == 1:
+                    message.state = mqtt_ms_wait_for_puback
+                elif qos == 2:
+                    message.state = mqtt_ms_wait_for_pubrec
+                self._out_message_mutex.release()
+
+                rc = self._send_publish(message.mid, message.topic, message.payload, message.qos, message.retain, message.dup)
+
+                # remove from inflight messages so it will be sent after a connection is made
+                if rc == MQTT_ERR_NO_CONN:
+                    with self._out_message_mutex:
+                        self._inflight_messages -= 1
+                        message.state = mqtt_ms_publish
+
+                return (rc, local_mid)
+            else:
+                message.state = mqtt_ms_queued
+                self._out_message_mutex.release()
+                return (MQTT_ERR_SUCCESS, local_mid)
+
+    def username_pw_set(self, username, password=None):
+        """Set a username and optionally a password for broker authentication.
+
+        Must be called before connect() to have any effect.
+        Requires a broker that supports MQTT v3.1.
+
+        username: The username to authenticate with. Need have no relationship to the client id.
+        password: The password to authenticate with. Optional, set to None if not required.
+        """
+        self._username = username.encode('utf-8')
+        self._password = password
+
+    def socket_factory_set(self, socket_factory):
+        """Set a socket factory to custom configure a different socket type for
+        mqtt connection.
+        Must be called before connect() to have any effect.
+        socket_factory: create_connection function which creates a socket to user's specification
+        """
+        self._socket_factory = socket_factory
+
+    def disconnect(self):
+        """Disconnect a connected client from the broker."""
+        self._state_mutex.acquire()
+        self._state = mqtt_cs_disconnecting
+        self._state_mutex.release()
+
+        self._backoffCore.stopStableConnectionTimer()
+
+        if self._sock is None and self._ssl is None:
+            return MQTT_ERR_NO_CONN
+
+        return self._send_disconnect()
+
+    def subscribe(self, topic, qos=0):
+        """Subscribe the client to one or more topics.
+
+        This function may be called in three different ways:
+
+        Simple string and integer
+        -------------------------
+        e.g. subscribe("my/topic", 2)
+
+        topic: A string specifying the subscription topic to subscribe to.
+        qos: The desired quality of service level for the subscription.
+             Defaults to 0.
+
+        String and integer tuple
+        ------------------------
+        e.g. subscribe(("my/topic", 1))
+
+        topic: A tuple of (topic, qos). Both topic and qos must be present in
+        the tuple.
+        qos: Not used.
+
+        List of string and integer tuples
+        ---------------------------------
+        e.g. subscribe([("my/topic", 0), ("another/topic", 2)])
+
+        This allows multiple topic subscriptions in a single SUBSCRIBE
+        command, which is more efficient than using multiple calls to
+        subscribe().
+
+        topic: A list of tuples of format (topic, qos). Both topic and qos must
+        be present in all of the tuples.
+        qos: Not used.
+
+        The function returns a tuple (result, mid), where result is
+        MQTT_ERR_SUCCESS to indicate success or (MQTT_ERR_NO_CONN, None) if the
+        client is not currently connected. mid is the message ID for the
+        subscribe request. The mid value can be used to track the subscribe
+        request by checking against the mid argument in the on_subscribe()
+        callback if it is defined.
+
+        Raises a ValueError if qos is not 0, 1 or 2, or if topic is None or has
+        zero string length, or if topic is not a string, tuple or list.
+ """ + topic_qos_list = None + if isinstance(topic, str): + if qos<0 or qos>2: + raise ValueError('Invalid QoS level.') + if topic is None or len(topic) == 0: + raise ValueError('Invalid topic.') + topic_qos_list = [(topic.encode('utf-8'), qos)] + elif isinstance(topic, tuple): + if topic[1]<0 or topic[1]>2: + raise ValueError('Invalid QoS level.') + if topic[0] is None or len(topic[0]) == 0 or not isinstance(topic[0], str): + raise ValueError('Invalid topic.') + topic_qos_list = [(topic[0].encode('utf-8'), topic[1])] + elif isinstance(topic, list): + topic_qos_list = [] + for t in topic: + if t[1]<0 or t[1]>2: + raise ValueError('Invalid QoS level.') + if t[0] is None or len(t[0]) == 0 or not isinstance(t[0], str): + raise ValueError('Invalid topic.') + topic_qos_list.append((t[0].encode('utf-8'), t[1])) + + if topic_qos_list is None: + raise ValueError("No topic specified, or incorrect topic type.") + + if self._sock is None and self._ssl is None: + return (MQTT_ERR_NO_CONN, None) + + return self._send_subscribe(False, topic_qos_list) + + def unsubscribe(self, topic): + """Unsubscribe the client from one or more topics. + + topic: A single string, or list of strings that are the subscription + topics to unsubscribe from. + + Returns a tuple (result, mid), where result is MQTT_ERR_SUCCESS + to indicate success or (MQTT_ERR_NO_CONN, None) if the client is not + currently connected. + mid is the message ID for the unsubscribe request. The mid value can be + used to track the unsubscribe request by checking against the mid + argument in the on_unsubscribe() callback if it is defined. + + Raises a ValueError if topic is None or has zero string length, or is + not a string or list. 
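+
+ e.g. unsubscribe("my/topic")
+ e.g. unsubscribe(["my/topic", "another/topic"])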
+ """ + topic_list = None + if topic is None: + raise ValueError('Invalid topic.') + if isinstance(topic, str): + if len(topic) == 0: + raise ValueError('Invalid topic.') + topic_list = [topic.encode('utf-8')] + elif isinstance(topic, list): + topic_list = [] + for t in topic: + if len(t) == 0 or not isinstance(t, str): + raise ValueError('Invalid topic.') + topic_list.append(t.encode('utf-8')) + + if topic_list is None: + raise ValueError("No topic specified, or incorrect topic type.") + + if self._sock is None and self._ssl is None: + return (MQTT_ERR_NO_CONN, None) + + return self._send_unsubscribe(False, topic_list) + + def loop_read(self, max_packets=1): + """Process read network events. Use in place of calling loop() if you + wish to handle your client reads as part of your own application. + + Use socket() to obtain the client socket to call select() or equivalent + on. + + Do not use if you are using the threaded interface loop_start().""" + if self._sock is None and self._ssl is None: + return MQTT_ERR_NO_CONN + + max_packets = len(self._out_messages) + len(self._in_messages) + if max_packets < 1: + max_packets = 1 + + for i in range(0, max_packets): + rc = self._packet_read() + if rc > 0: + return self._loop_rc_handle(rc) + elif rc == MQTT_ERR_AGAIN: + return MQTT_ERR_SUCCESS + return MQTT_ERR_SUCCESS + + def loop_write(self, max_packets=1): + """Process read network events. Use in place of calling loop() if you + wish to handle your client reads as part of your own application. + + Use socket() to obtain the client socket to call select() or equivalent + on. + + Use want_write() to determine if there is data waiting to be written. 
+
+ Do not use if you are using the threaded interface loop_start()."""
+
+ if self._sock is None and self._ssl is None:
+ return MQTT_ERR_NO_CONN
+
+ max_packets = len(self._out_packet) + 1
+ if max_packets < 1:
+ max_packets = 1
+
+ for i in range(0, max_packets):
+ rc = self._packet_write()
+ if rc > 0:
+ return self._loop_rc_handle(rc)
+ elif rc == MQTT_ERR_AGAIN:
+ return MQTT_ERR_SUCCESS
+ return MQTT_ERR_SUCCESS
+
+ def want_write(self):
+ """Call to determine if there is network data waiting to be written.
+ Useful if you are calling select() yourself rather than using loop().
+ """
+ if self._current_out_packet or len(self._out_packet) > 0:
+ return True
+ else:
+ return False
+
+ def loop_misc(self):
+ """Process miscellaneous network events (keepalive pings, message
+ retries). Use in place of calling loop() if you are calling select()
+ or equivalent yourself.
+
+ Do not use if you are using the threaded interface loop_start()."""
+ if self._sock is None and self._ssl is None:
+ return MQTT_ERR_NO_CONN
+
+ now = time.time()
+ self._check_keepalive()
+ if self._last_retry_check+1 < now:
+ # Only check once a second at most
+ self._message_retry_check()
+ self._last_retry_check = now
+
+ if self._ping_t > 0 and now - self._ping_t >= self._keepalive:
+ # client->ping_t != 0 means we are waiting for a pingresp.
+ # This hasn't happened in the keepalive time so we should disconnect.
+ if self._ssl:
+ self._ssl.close()
+ self._ssl = None
+ elif self._sock:
+ self._sock.close()
+ self._sock = None
+
+ self._callback_mutex.acquire()
+ if self._state == mqtt_cs_disconnecting:
+ rc = MQTT_ERR_SUCCESS
+ else:
+ rc = 1
+ if self.on_disconnect:
+ self._in_callback = True
+ self.on_disconnect(self, self._userdata, rc)
+ self._in_callback = False
+ self._callback_mutex.release()
+ return MQTT_ERR_CONN_LOST
+
+ return MQTT_ERR_SUCCESS
+
+ def max_inflight_messages_set(self, inflight):
+ """Set the maximum number of messages with QoS>0 that can be part way
+ through their network flow at once. A value of 0 means no maximum.
Defaults to 20.""" + if inflight < 0: + raise ValueError('Invalid inflight.') + self._max_inflight_messages = inflight + + def message_retry_set(self, retry): + """Set the timeout in seconds before a message with QoS>0 is retried. + 20 seconds by default.""" + if retry < 0: + raise ValueError('Invalid retry.') + + self._message_retry = retry + + def user_data_set(self, userdata): + """Set the user data variable passed to callbacks. May be any data type.""" + self._userdata = userdata + + def will_set(self, topic, payload=None, qos=0, retain=False): + """Set a Will to be sent by the broker in case the client disconnects unexpectedly. + + This must be called before connect() to have any effect. + + topic: The topic that the will message should be published on. + payload: The message to send as a will. If not given, or set to None a + zero length message will be used as the will. Passing an int or float + will result in the payload being converted to a string representing + that number. If you wish to send a true int/float, use struct.pack() to + create the payload you require. + qos: The quality of service level to use for the will. + retain: If set to true, the will message will be set as the "last known + good"/retained message for the topic. + + Raises a ValueError if qos is not 0, 1 or 2, or if topic is None or has + zero string length. 
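+
+ e.g. will_set("my/status", "offline", qos=1, retain=True)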
+ """ + if topic is None or len(topic) == 0: + raise ValueError('Invalid topic.') + if qos<0 or qos>2: + raise ValueError('Invalid QoS level.') + if isinstance(payload, str): + self._will_payload = payload.encode('utf-8') + elif isinstance(payload, bytearray): + self._will_payload = payload + elif isinstance(payload, int) or isinstance(payload, float): + self._will_payload = str(payload) + elif payload is None: + self._will_payload = None + else: + raise TypeError('payload must be a string, bytearray, int, float or None.') + + self._will = True + self._will_topic = topic.encode('utf-8') + self._will_qos = qos + self._will_retain = retain + + def will_clear(self): + """ Removes a will that was previously configured with will_set(). + + Must be called before connect() to have any effect.""" + self._will = False + self._will_topic = "" + self._will_payload = None + self._will_qos = 0 + self._will_retain = False + + def socket(self): + """Return the socket or ssl object for this client.""" + if self._ssl: + if self._useSecuredWebsocket: + return self._ssl.getSSLSocket() + else: + return self._ssl + else: + return self._sock + + def loop_forever(self, timeout=1.0, max_packets=1, retry_first_connection=False): + """This function call loop() for you in an infinite blocking loop. It + is useful for the case where you only want to run the MQTT client loop + in your program. + + loop_forever() will handle reconnecting for you. If you call + disconnect() in a callback it will return. + + + timeout: The time in seconds to wait for incoming/outgoing network + traffic before timing out and returning. + max_packets: Not currently used. + retry_first_connection: Should the first connection attempt be retried on failure. 
+ + Raises socket.error on first connection failures unless retry_first_connection=True + """ + + run = True + + while run: + if self._state == mqtt_cs_connect_async: + try: + self.reconnect() + except socket.error: + if not retry_first_connection: + raise + self._easy_log(MQTT_LOG_DEBUG, "Connection failed, retrying") + self._backoffCore.backOff() + # time.sleep(1) + else: + break + + while run: + rc = MQTT_ERR_SUCCESS + while rc == MQTT_ERR_SUCCESS: + rc = self.loop(timeout, max_packets) + # We don't need to worry about locking here, because we've + # either called loop_forever() when in single threaded mode, or + # in multi threaded mode when loop_stop() has been called and + # so no other threads can access _current_out_packet, + # _out_packet or _messages. + if (self._thread_terminate is True + and self._current_out_packet is None + and len(self._out_packet) == 0 + and len(self._out_messages) == 0): + + rc = 1 + run = False + + self._state_mutex.acquire() + if self._state == mqtt_cs_disconnecting or run is False or self._thread_terminate is True: + run = False + self._state_mutex.release() + else: + self._state_mutex.release() + self._backoffCore.backOff() + # time.sleep(1) + + self._state_mutex.acquire() + if self._state == mqtt_cs_disconnecting or run is False or self._thread_terminate is True: + run = False + self._state_mutex.release() + else: + self._state_mutex.release() + try: + self.reconnect() + except socket.error as err: + pass + + return rc + + def loop_start(self): + """This is part of the threaded client interface. Call this once to + start a new thread to process network traffic. This provides an + alternative to repeatedly calling loop() yourself. + """ + if self._thread is not None: + return MQTT_ERR_INVAL + + self._thread_terminate = False + self._thread = threading.Thread(target=self._thread_main) + self._thread.daemon = True + self._thread.start() + + def loop_stop(self, force=False): + """This is part of the threaded client interface. 
Call this once to
+ stop the network thread previously created with loop_start(). This call
+ will block until the network thread finishes.
+
+ The force parameter is currently ignored.
+ """
+ if self._thread is None:
+ return MQTT_ERR_INVAL
+
+ self._thread_terminate = True
+ self._thread.join()
+ self._thread = None
+
+ def message_callback_add(self, sub, callback):
+ """Register a message callback for a specific topic.
+ Messages that match 'sub' will be passed to 'callback'. Any
+ non-matching messages will be passed to the default on_message
+ callback.
+
+ Call multiple times with different 'sub' to define multiple topic
+ specific callbacks.
+
+ Topic specific callbacks may be removed with
+ message_callback_remove()."""
+ if callback is None or sub is None:
+ raise ValueError("sub and callback must both be defined.")
+
+ self._callback_mutex.acquire()
+ for i in range(0, len(self.on_message_filtered)):
+ if self.on_message_filtered[i][0] == sub:
+ self.on_message_filtered[i] = (sub, callback)
+ self._callback_mutex.release()
+ return
+
+ self.on_message_filtered.append((sub, callback))
+ self._callback_mutex.release()
+
+ def message_callback_remove(self, sub):
+ """Remove a message callback previously registered with
+ message_callback_add()."""
+ if sub is None:
+ raise ValueError("sub must be defined.")
+
+ self._callback_mutex.acquire()
+ for i in range(0, len(self.on_message_filtered)):
+ if self.on_message_filtered[i][0] == sub:
+ self.on_message_filtered.pop(i)
+ self._callback_mutex.release()
+ return
+ self._callback_mutex.release()
+
+ # ============================================================
+ # Private functions
+ # ============================================================
+
+ def _loop_rc_handle(self, rc):
+ if rc:
+ if self._ssl:
+ self._ssl.close()
+ self._ssl = None
+ elif self._sock:
+ self._sock.close()
+ self._sock = None
+
+ self._state_mutex.acquire()
+ if self._state == mqtt_cs_disconnecting:
+ rc = MQTT_ERR_SUCCESS
+
self._state_mutex.release()
+ self._callback_mutex.acquire()
+ if self.on_disconnect:
+ self._in_callback = True
+ self.on_disconnect(self, self._userdata, rc)
+ self._in_callback = False
+
+ self._callback_mutex.release()
+ return rc
+
+ def _packet_read(self):
+ # This gets called if pselect() indicates that there is network data
+ # available - ie. at least one byte. What we do depends on what data we
+ # already have.
+ # If we've not got a command, attempt to read one and save it. This should
+ # always work because it's only a single byte.
+ # Then try to read the remaining length. This may fail because it may
+ # be more than one byte - will need to save data pending next read if it
+ # does fail.
+ # Then try to read the remaining payload, where 'payload' here means the
+ # combined variable header and actual payload. This is the most likely to
+ # fail due to longer length, so save current data and current position.
+ # After all data is read, send to _packet_handle() to deal with.
+ # Finally, free the memory and reset everything to starting conditions.
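+ # Worked example of the remaining-length decode below: a remaining
+ # length of 321 arrives on the wire as the two bytes 0xC1 0x02, since
+ # (0xC1 & 127) + (0x02 & 127)*128 = 65 + 256 = 321.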
+ if self._in_packet['command'] == 0: + try: + if self._ssl: + command = self._ssl.read(1) + else: + command = self._sock.recv(1) + except socket.error as err: + if self._ssl and (err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE): + return MQTT_ERR_AGAIN + if err.errno == EAGAIN: + return MQTT_ERR_AGAIN + print(err) + return 1 + else: + if len(command) == 0: + return 1 + command = struct.unpack("!B", command) + self._in_packet['command'] = command[0] + + if self._in_packet['have_remaining'] == 0: + # Read remaining + # Algorithm for decoding taken from pseudo code at + # http://publib.boulder.ibm.com/infocenter/wmbhelp/v6r0m0/topic/com.ibm.etools.mft.doc/ac10870_.htm + while True: + try: + if self._ssl: + byte = self._ssl.read(1) + else: + byte = self._sock.recv(1) + except socket.error as err: + if self._ssl and (err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE): + return MQTT_ERR_AGAIN + if err.errno == EAGAIN: + return MQTT_ERR_AGAIN + print(err) + return 1 + else: + byte = struct.unpack("!B", byte) + byte = byte[0] + self._in_packet['remaining_count'].append(byte) + # Max 4 bytes length for remaining length as defined by protocol. + # Anything more likely means a broken/malicious client. 
+ if len(self._in_packet['remaining_count']) > 4: + return MQTT_ERR_PROTOCOL + + self._in_packet['remaining_length'] = self._in_packet['remaining_length'] + (byte & 127)*self._in_packet['remaining_mult'] + self._in_packet['remaining_mult'] = self._in_packet['remaining_mult'] * 128 + + if (byte & 128) == 0: + break + + self._in_packet['have_remaining'] = 1 + self._in_packet['to_process'] = self._in_packet['remaining_length'] + + while self._in_packet['to_process'] > 0: + try: + if self._ssl: + data = self._ssl.read(self._in_packet['to_process']) + else: + data = self._sock.recv(self._in_packet['to_process']) + except socket.error as err: + if self._ssl and (err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE): + return MQTT_ERR_AGAIN + if err.errno == EAGAIN: + return MQTT_ERR_AGAIN + print(err) + return 1 + else: + self._in_packet['to_process'] = self._in_packet['to_process'] - len(data) + self._in_packet['packet'] = self._in_packet['packet'] + data + + # All data for this packet is read. 
+ self._in_packet['pos'] = 0 + rc = self._packet_handle() + + # Free data and reset values + self._in_packet = dict( + command=0, + have_remaining=0, + remaining_count=[], + remaining_mult=1, + remaining_length=0, + packet=b"", + to_process=0, + pos=0) + + self._msgtime_mutex.acquire() + self._last_msg_in = time.time() + self._msgtime_mutex.release() + return rc + + def _packet_write(self): + self._current_out_packet_mutex.acquire() + while self._current_out_packet: + packet = self._current_out_packet + + try: + if self._ssl: + write_length = self._ssl.write(packet['packet'][packet['pos']:]) + else: + write_length = self._sock.send(packet['packet'][packet['pos']:]) + except AttributeError: + self._current_out_packet_mutex.release() + return MQTT_ERR_SUCCESS + except socket.error as err: + self._current_out_packet_mutex.release() + if self._ssl and (err.errno == ssl.SSL_ERROR_WANT_READ or err.errno == ssl.SSL_ERROR_WANT_WRITE): + return MQTT_ERR_AGAIN + if err.errno == EAGAIN: + return MQTT_ERR_AGAIN + print(err) + return 1 + + if write_length > 0: + packet['to_process'] = packet['to_process'] - write_length + packet['pos'] = packet['pos'] + write_length + + if packet['to_process'] == 0: + if (packet['command'] & 0xF0) == PUBLISH and packet['qos'] == 0: + self._callback_mutex.acquire() + if self.on_publish: + self._in_callback = True + self.on_publish(self, self._userdata, packet['mid']) + self._in_callback = False + + self._callback_mutex.release() + + if (packet['command'] & 0xF0) == DISCONNECT: + self._current_out_packet_mutex.release() + + self._msgtime_mutex.acquire() + self._last_msg_out = time.time() + self._msgtime_mutex.release() + + self._callback_mutex.acquire() + if self.on_disconnect: + self._in_callback = True + self.on_disconnect(self, self._userdata, 0) + self._in_callback = False + self._callback_mutex.release() + + if self._ssl: + self._ssl.close() + self._ssl = None + if self._sock: + self._sock.close() + self._sock = None + return 
MQTT_ERR_SUCCESS + + self._out_packet_mutex.acquire() + if len(self._out_packet) > 0: + self._current_out_packet = self._out_packet.pop(0) + else: + self._current_out_packet = None + self._out_packet_mutex.release() + else: + pass # FIXME + + self._current_out_packet_mutex.release() + + self._msgtime_mutex.acquire() + self._last_msg_out = time.time() + self._msgtime_mutex.release() + return MQTT_ERR_SUCCESS + + def _easy_log(self, level, buf): + if self.on_log: + self.on_log(self, self._userdata, level, buf) + + def _check_keepalive(self): + now = time.time() + self._msgtime_mutex.acquire() + last_msg_out = self._last_msg_out + last_msg_in = self._last_msg_in + self._msgtime_mutex.release() + if (self._sock is not None or self._ssl is not None) and (now - last_msg_out >= self._keepalive or now - last_msg_in >= self._keepalive): + if self._state == mqtt_cs_connected and self._ping_t == 0: + self._send_pingreq() + self._msgtime_mutex.acquire() + self._last_msg_out = now + self._last_msg_in = now + self._msgtime_mutex.release() + else: + if self._ssl: + self._ssl.close() + self._ssl = None + elif self._sock: + self._sock.close() + self._sock = None + + if self._state == mqtt_cs_disconnecting: + rc = MQTT_ERR_SUCCESS + else: + rc = 1 + self._callback_mutex.acquire() + if self.on_disconnect: + self._in_callback = True + self.on_disconnect(self, self._userdata, rc) + self._in_callback = False + self._callback_mutex.release() + + def _mid_generate(self): + self._last_mid = self._last_mid + 1 + if self._last_mid == 65536: + self._last_mid = 1 + return self._last_mid + + def _topic_wildcard_len_check(self, topic): + # Search for + or # in a topic. Return MQTT_ERR_INVAL if found. + # Also returns MQTT_ERR_INVAL if the topic string is too long. + # Returns MQTT_ERR_SUCCESS if everything is fine. 
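+ # e.g. "sensors/room1/temp" gives MQTT_ERR_SUCCESS, while "sensors/+/temp"
+ # or "sensors/#" gives MQTT_ERR_INVAL, since wildcards are only valid in
+ # subscriptions, not in topics being published to.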
+ if '+' in topic or '#' in topic or len(topic) == 0 or len(topic) > 65535: + return MQTT_ERR_INVAL + else: + return MQTT_ERR_SUCCESS + + def _send_pingreq(self): + self._easy_log(MQTT_LOG_DEBUG, "Sending PINGREQ") + rc = self._send_simple_command(PINGREQ) + if rc == MQTT_ERR_SUCCESS: + self._ping_t = time.time() + return rc + + def _send_pingresp(self): + self._easy_log(MQTT_LOG_DEBUG, "Sending PINGRESP") + return self._send_simple_command(PINGRESP) + + def _send_puback(self, mid): + self._easy_log(MQTT_LOG_DEBUG, "Sending PUBACK (Mid: "+str(mid)+")") + return self._send_command_with_mid(PUBACK, mid, False) + + def _send_pubcomp(self, mid): + self._easy_log(MQTT_LOG_DEBUG, "Sending PUBCOMP (Mid: "+str(mid)+")") + return self._send_command_with_mid(PUBCOMP, mid, False) + + def _pack_remaining_length(self, packet, remaining_length): + remaining_bytes = [] + while True: + byte = remaining_length % 128 + remaining_length = remaining_length // 128 + # If there are more digits to encode, set the top bit of this digit + if remaining_length > 0: + byte = byte | 0x80 + + remaining_bytes.append(byte) + packet.extend(struct.pack("!B", byte)) + if remaining_length == 0: + # FIXME - this doesn't deal with incorrectly large payloads + return packet + + def _pack_str16(self, packet, data): + if sys.version_info[0] < 3: + if isinstance(data, bytearray): + packet.extend(struct.pack("!H", len(data))) + packet.extend(data) + elif isinstance(data, str): + udata = data.encode('utf-8') + pack_format = "!H" + str(len(udata)) + "s" + packet.extend(struct.pack(pack_format, len(udata), udata)) + elif isinstance(data, unicode): + udata = data.encode('utf-8') + pack_format = "!H" + str(len(udata)) + "s" + packet.extend(struct.pack(pack_format, len(udata), udata)) + else: + raise TypeError + else: + if isinstance(data, bytearray) or isinstance(data, bytes): + packet.extend(struct.pack("!H", len(data))) + packet.extend(data) + elif isinstance(data, str): + udata = data.encode('utf-8') + 
pack_format = "!H" + str(len(udata)) + "s" + packet.extend(struct.pack(pack_format, len(udata), udata)) + else: + raise TypeError + + def _send_publish(self, mid, topic, payload=None, qos=0, retain=False, dup=False): + if self._sock is None and self._ssl is None: + return MQTT_ERR_NO_CONN + + utopic = topic.encode('utf-8') + command = PUBLISH | ((dup&0x1)<<3) | (qos<<1) | retain + packet = bytearray() + packet.extend(struct.pack("!B", command)) + if payload is None: + remaining_length = 2+len(utopic) + self._easy_log(MQTT_LOG_DEBUG, "Sending PUBLISH (d"+str(dup)+", q"+str(qos)+", r"+str(int(retain))+", m"+str(mid)+", '"+topic+"' (NULL payload)") + else: + if isinstance(payload, str): + upayload = payload.encode('utf-8') + payloadlen = len(upayload) + elif isinstance(payload, bytearray): + payloadlen = len(payload) + elif isinstance(payload, unicode): + upayload = payload.encode('utf-8') + payloadlen = len(upayload) + + remaining_length = 2+len(utopic) + payloadlen + self._easy_log(MQTT_LOG_DEBUG, "Sending PUBLISH (d"+str(dup)+", q"+str(qos)+", r"+str(int(retain))+", m"+str(mid)+", '"+topic+"', ... 
("+str(payloadlen)+" bytes)") + + if qos > 0: + # For message id + remaining_length = remaining_length + 2 + + self._pack_remaining_length(packet, remaining_length) + self._pack_str16(packet, topic) + + if qos > 0: + # For message id + packet.extend(struct.pack("!H", mid)) + + if payload is not None: + if isinstance(payload, str): + pack_format = str(payloadlen) + "s" + packet.extend(struct.pack(pack_format, upayload)) + elif isinstance(payload, bytearray): + packet.extend(payload) + elif isinstance(payload, unicode): + pack_format = str(payloadlen) + "s" + packet.extend(struct.pack(pack_format, upayload)) + else: + raise TypeError('payload must be a string, unicode or a bytearray.') + + return self._packet_queue(PUBLISH, packet, mid, qos) + + def _send_pubrec(self, mid): + self._easy_log(MQTT_LOG_DEBUG, "Sending PUBREC (Mid: "+str(mid)+")") + return self._send_command_with_mid(PUBREC, mid, False) + + def _send_pubrel(self, mid, dup=False): + self._easy_log(MQTT_LOG_DEBUG, "Sending PUBREL (Mid: "+str(mid)+")") + return self._send_command_with_mid(PUBREL|2, mid, dup) + + def _send_command_with_mid(self, command, mid, dup): + # For PUBACK, PUBCOMP, PUBREC, and PUBREL + if dup: + command = command | 8 + + remaining_length = 2 + packet = struct.pack('!BBH', command, remaining_length, mid) + return self._packet_queue(command, packet, mid, 1) + + def _send_simple_command(self, command): + # For DISCONNECT, PINGREQ and PINGRESP + remaining_length = 0 + packet = struct.pack('!BB', command, remaining_length) + return self._packet_queue(command, packet, 0, 0) + + def _send_connect(self, keepalive, clean_session): + if self._protocol == MQTTv31: + protocol = PROTOCOL_NAMEv31 + proto_ver = 3 + else: + protocol = PROTOCOL_NAMEv311 + proto_ver = 4 + remaining_length = 2+len(protocol) + 1+1+2 + 2+len(self._client_id) + connect_flags = 0 + if clean_session: + connect_flags = connect_flags | 0x02 + + if self._will: + if self._will_payload is not None: + remaining_length = 
remaining_length + 2+len(self._will_topic) + 2+len(self._will_payload) + else: + remaining_length = remaining_length + 2+len(self._will_topic) + 2 + + connect_flags = connect_flags | 0x04 | ((self._will_qos&0x03) << 3) | ((self._will_retain&0x01) << 5) + + if self._username: + remaining_length = remaining_length + 2+len(self._username) + connect_flags = connect_flags | 0x80 + if self._password: + connect_flags = connect_flags | 0x40 + remaining_length = remaining_length + 2+len(self._password) + + command = CONNECT + packet = bytearray() + packet.extend(struct.pack("!B", command)) + + self._pack_remaining_length(packet, remaining_length) + packet.extend(struct.pack("!H"+str(len(protocol))+"sBBH", len(protocol), protocol, proto_ver, connect_flags, keepalive)) + + self._pack_str16(packet, self._client_id) + + if self._will: + self._pack_str16(packet, self._will_topic) + if self._will_payload is None or len(self._will_payload) == 0: + packet.extend(struct.pack("!H", 0)) + else: + self._pack_str16(packet, self._will_payload) + + if self._username: + self._pack_str16(packet, self._username) + + if self._password: + self._pack_str16(packet, self._password) + + self._keepalive = keepalive + return self._packet_queue(command, packet, 0, 0) + + def _send_disconnect(self): + return self._send_simple_command(DISCONNECT) + + def _send_subscribe(self, dup, topics): + remaining_length = 2 + for t in topics: + remaining_length = remaining_length + 2+len(t[0])+1 + + command = SUBSCRIBE | (dup<<3) | (1<<1) + packet = bytearray() + packet.extend(struct.pack("!B", command)) + self._pack_remaining_length(packet, remaining_length) + local_mid = self._mid_generate() + packet.extend(struct.pack("!H", local_mid)) + for t in topics: + self._pack_str16(packet, t[0]) + packet.extend(struct.pack("B", t[1])) + return (self._packet_queue(command, packet, local_mid, 1), local_mid) + + def _send_unsubscribe(self, dup, topics): + remaining_length = 2 + for t in topics: + remaining_length = 
remaining_length + 2+len(t) + + command = UNSUBSCRIBE | (dup<<3) | (1<<1) + packet = bytearray() + packet.extend(struct.pack("!B", command)) + self._pack_remaining_length(packet, remaining_length) + local_mid = self._mid_generate() + packet.extend(struct.pack("!H", local_mid)) + for t in topics: + self._pack_str16(packet, t) + return (self._packet_queue(command, packet, local_mid, 1), local_mid) + + def _message_retry_check_actual(self, messages, mutex): + mutex.acquire() + now = time.time() + for m in messages: + if m.timestamp + self._message_retry < now: + if m.state == mqtt_ms_wait_for_puback or m.state == mqtt_ms_wait_for_pubrec: + m.timestamp = now + m.dup = True + self._send_publish(m.mid, m.topic, m.payload, m.qos, m.retain, m.dup) + elif m.state == mqtt_ms_wait_for_pubrel: + m.timestamp = now + m.dup = True + self._send_pubrec(m.mid) + elif m.state == mqtt_ms_wait_for_pubcomp: + m.timestamp = now + m.dup = True + self._send_pubrel(m.mid, True) + mutex.release() + + def _message_retry_check(self): + self._message_retry_check_actual(self._out_messages, self._out_message_mutex) + self._message_retry_check_actual(self._in_messages, self._in_message_mutex) + + def _messages_reconnect_reset_out(self): + self._out_message_mutex.acquire() + self._inflight_messages = 0 + for m in self._out_messages: + m.timestamp = 0 + if self._max_inflight_messages == 0 or self._inflight_messages < self._max_inflight_messages: + if m.qos == 0: + m.state = mqtt_ms_publish + elif m.qos == 1: + #self._inflight_messages = self._inflight_messages + 1 + if m.state == mqtt_ms_wait_for_puback: + m.dup = True + m.state = mqtt_ms_publish + elif m.qos == 2: + #self._inflight_messages = self._inflight_messages + 1 + if m.state == mqtt_ms_wait_for_pubcomp: + m.state = mqtt_ms_resend_pubrel + m.dup = True + else: + if m.state == mqtt_ms_wait_for_pubrec: + m.dup = True + m.state = mqtt_ms_publish + else: + m.state = mqtt_ms_queued + self._out_message_mutex.release() + + def 
_messages_reconnect_reset_in(self): + self._in_message_mutex.acquire() + for m in self._in_messages: + m.timestamp = 0 + if m.qos != 2: + self._in_messages.pop(self._in_messages.index(m)) + else: + # Preserve current state + pass + self._in_message_mutex.release() + + def _messages_reconnect_reset(self): + self._messages_reconnect_reset_out() + self._messages_reconnect_reset_in() + + def _packet_queue(self, command, packet, mid, qos): + mpkt = dict( + command = command, + mid = mid, + qos = qos, + pos = 0, + to_process = len(packet), + packet = packet) + + self._out_packet_mutex.acquire() + self._out_packet.append(mpkt) + if self._current_out_packet_mutex.acquire(False): + if self._current_out_packet is None and len(self._out_packet) > 0: + self._current_out_packet = self._out_packet.pop(0) + self._current_out_packet_mutex.release() + self._out_packet_mutex.release() + + # Write a single byte to sockpairW (connected to sockpairR) to break + # out of select() if in threaded mode. + try: + self._sockpairW.send(sockpair_data) + except socket.error as err: + if err.errno != EAGAIN: + raise + + if not self._in_callback and self._thread is None: + return self.loop_write() + else: + return MQTT_ERR_SUCCESS + + def _packet_handle(self): + cmd = self._in_packet['command']&0xF0 + if cmd == PINGREQ: + return self._handle_pingreq() + elif cmd == PINGRESP: + return self._handle_pingresp() + elif cmd == PUBACK: + return self._handle_pubackcomp("PUBACK") + elif cmd == PUBCOMP: + return self._handle_pubackcomp("PUBCOMP") + elif cmd == PUBLISH: + return self._handle_publish() + elif cmd == PUBREC: + return self._handle_pubrec() + elif cmd == PUBREL: + return self._handle_pubrel() + elif cmd == CONNACK: + return self._handle_connack() + elif cmd == SUBACK: + return self._handle_suback() + elif cmd == UNSUBACK: + return self._handle_unsuback() + else: + # If we don't recognise the command, return an error straight away. 
+ self._easy_log(MQTT_LOG_ERR, "Error: Unrecognised command "+str(cmd)) + return MQTT_ERR_PROTOCOL + + def _handle_pingreq(self): + if self._strict_protocol: + if self._in_packet['remaining_length'] != 0: + return MQTT_ERR_PROTOCOL + + self._easy_log(MQTT_LOG_DEBUG, "Received PINGREQ") + return self._send_pingresp() + + def _handle_pingresp(self): + if self._strict_protocol: + if self._in_packet['remaining_length'] != 0: + return MQTT_ERR_PROTOCOL + + # No longer waiting for a PINGRESP. + self._ping_t = 0 + self._easy_log(MQTT_LOG_DEBUG, "Received PINGRESP") + return MQTT_ERR_SUCCESS + + def _handle_connack(self): + if self._strict_protocol: + if self._in_packet['remaining_length'] != 2: + return MQTT_ERR_PROTOCOL + + if len(self._in_packet['packet']) != 2: + return MQTT_ERR_PROTOCOL + + (flags, result) = struct.unpack("!BB", self._in_packet['packet']) + if result == CONNACK_REFUSED_PROTOCOL_VERSION and self._protocol == MQTTv311: + self._easy_log(MQTT_LOG_DEBUG, "Received CONNACK ("+str(flags)+", "+str(result)+"), attempting downgrade to MQTT v3.1.") + # Downgrade to MQTT v3.1 + self._protocol = MQTTv31 + return self.reconnect() + + if result == 0: + self._state = mqtt_cs_connected + + self._easy_log(MQTT_LOG_DEBUG, "Received CONNACK ("+str(flags)+", "+str(result)+")") + self._callback_mutex.acquire() + if self.on_connect: + self._in_callback = True + + if sys.version_info[0] < 3: + argcount = self.on_connect.func_code.co_argcount + else: + argcount = self.on_connect.__code__.co_argcount + + if argcount == 3: + self.on_connect(self, self._userdata, result) + else: + flags_dict = dict() + flags_dict['session present'] = flags & 0x01 + self.on_connect(self, self._userdata, flags_dict, result) + self._in_callback = False + self._callback_mutex.release() + + # Start counting for stable connection + self._backoffCore.startStableConnectionTimer() + + if result == 0: + rc = 0 + self._out_message_mutex.acquire() + for m in self._out_messages: + m.timestamp = time.time() + 
+            if m.state == mqtt_ms_queued:
+                self.loop_write() # Process outgoing messages that have just been queued up
+                self._out_message_mutex.release()
+                return MQTT_ERR_SUCCESS
+
+            if m.qos == 0:
+                self._in_callback = True # Don't call loop_write after _send_publish()
+                rc = self._send_publish(m.mid, m.topic, m.payload, m.qos, m.retain, m.dup)
+                self._in_callback = False
+                if rc != 0:
+                    self._out_message_mutex.release()
+                    return rc
+            elif m.qos == 1:
+                if m.state == mqtt_ms_publish:
+                    self._inflight_messages = self._inflight_messages + 1
+                    m.state = mqtt_ms_wait_for_puback
+                    self._in_callback = True # Don't call loop_write after _send_publish()
+                    rc = self._send_publish(m.mid, m.topic, m.payload, m.qos, m.retain, m.dup)
+                    self._in_callback = False
+                    if rc != 0:
+                        self._out_message_mutex.release()
+                        return rc
+            elif m.qos == 2:
+                if m.state == mqtt_ms_publish:
+                    self._inflight_messages = self._inflight_messages + 1
+                    m.state = mqtt_ms_wait_for_pubrec
+                    self._in_callback = True # Don't call loop_write after _send_publish()
+                    rc = self._send_publish(m.mid, m.topic, m.payload, m.qos, m.retain, m.dup)
+                    self._in_callback = False
+                    if rc != 0:
+                        self._out_message_mutex.release()
+                        return rc
+                elif m.state == mqtt_ms_resend_pubrel:
+                    self._inflight_messages = self._inflight_messages + 1
+                    m.state = mqtt_ms_wait_for_pubcomp
+                    self._in_callback = True # Don't call loop_write after _send_pubrel()
+                    rc = self._send_pubrel(m.mid, m.dup)
+                    self._in_callback = False
+                    if rc != 0:
+                        self._out_message_mutex.release()
+                        return rc
+        self.loop_write() # Process outgoing messages that have just been queued up
+        self._out_message_mutex.release()
+        return rc
+    elif result > 0 and result < 6:
+        return MQTT_ERR_CONN_REFUSED
+    else:
+        return MQTT_ERR_PROTOCOL
+
+    def _handle_suback(self):
+        self._easy_log(MQTT_LOG_DEBUG, "Received SUBACK")
+        pack_format = "!H" + str(len(self._in_packet['packet'])-2) + 's'
+        (mid, packet) = struct.unpack(pack_format, self._in_packet['packet'])
+        pack_format = "!" + "B"*len(packet)
+        granted_qos = struct.unpack(pack_format, packet)
+
+        self._callback_mutex.acquire()
+        if self.on_subscribe:
+            self._in_callback = True
+            self.on_subscribe(self, self._userdata, mid, granted_qos)
+            self._in_callback = False
+        self._callback_mutex.release()
+
+        return MQTT_ERR_SUCCESS
+
+    def _handle_publish(self):
+        rc = 0
+
+        header = self._in_packet['command']
+        message = MQTTMessage()
+        message.dup = (header & 0x08)>>3
+        message.qos = (header & 0x06)>>1
+        message.retain = (header & 0x01)
+
+        pack_format = "!H" + str(len(self._in_packet['packet'])-2) + 's'
+        (slen, packet) = struct.unpack(pack_format, self._in_packet['packet'])
+        pack_format = '!' + str(slen) + 's' + str(len(packet)-slen) + 's'
+        (message.topic, packet) = struct.unpack(pack_format, packet)
+
+        if len(message.topic) == 0:
+            return MQTT_ERR_PROTOCOL
+
+        if sys.version_info[0] >= 3:
+            message.topic = message.topic.decode('utf-8')
+
+        if message.qos > 0:
+            pack_format = "!H" + str(len(packet)-2) + 's'
+            (message.mid, packet) = struct.unpack(pack_format, packet)
+
+        message.payload = packet
+
+        self._easy_log(
+            MQTT_LOG_DEBUG,
+            "Received PUBLISH (d"+str(message.dup)+
+            ", q"+str(message.qos)+", r"+str(message.retain)+
+            ", m"+str(message.mid)+", '"+message.topic+
+            "', ... ("+str(len(message.payload))+" bytes)")
+
+        message.timestamp = time.time()
+        if message.qos == 0:
+            self._handle_on_message(message)
+            return MQTT_ERR_SUCCESS
+        elif message.qos == 1:
+            rc = self._send_puback(message.mid)
+            self._handle_on_message(message)
+            return rc
+        elif message.qos == 2:
+            rc = self._send_pubrec(message.mid)
+            message.state = mqtt_ms_wait_for_pubrel
+            self._in_message_mutex.acquire()
+            self._in_messages.append(message)
+            self._in_message_mutex.release()
+            return rc
+        else:
+            return MQTT_ERR_PROTOCOL
+
+    def _handle_pubrel(self):
+        if self._strict_protocol:
+            if self._in_packet['remaining_length'] != 2:
+                return MQTT_ERR_PROTOCOL
+
+        if len(self._in_packet['packet']) != 2:
+            return MQTT_ERR_PROTOCOL
+
+        mid = struct.unpack("!H", self._in_packet['packet'])
+        mid = mid[0]
+        self._easy_log(MQTT_LOG_DEBUG, "Received PUBREL (Mid: "+str(mid)+")")
+
+        self._in_message_mutex.acquire()
+        for i in range(len(self._in_messages)):
+            if self._in_messages[i].mid == mid:
+
+                # Only pass the message on if we have removed it from the queue - this
+                # prevents multiple callbacks for the same message.
+ self._handle_on_message(self._in_messages[i]) + self._in_messages.pop(i) + self._inflight_messages = self._inflight_messages - 1 + if self._max_inflight_messages > 0: + self._out_message_mutex.acquire() + rc = self._update_inflight() + self._out_message_mutex.release() + if rc != MQTT_ERR_SUCCESS: + self._in_message_mutex.release() + return rc + + self._in_message_mutex.release() + return self._send_pubcomp(mid) + + self._in_message_mutex.release() + return MQTT_ERR_SUCCESS + + def _update_inflight(self): + # Dont lock message_mutex here + for m in self._out_messages: + if self._inflight_messages < self._max_inflight_messages: + if m.qos > 0 and m.state == mqtt_ms_queued: + self._inflight_messages = self._inflight_messages + 1 + if m.qos == 1: + m.state = mqtt_ms_wait_for_puback + elif m.qos == 2: + m.state = mqtt_ms_wait_for_pubrec + rc = self._send_publish(m.mid, m.topic, m.payload, m.qos, m.retain, m.dup) + if rc != 0: + return rc + else: + return MQTT_ERR_SUCCESS + return MQTT_ERR_SUCCESS + + def _handle_pubrec(self): + if self._strict_protocol: + if self._in_packet['remaining_length'] != 2: + return MQTT_ERR_PROTOCOL + + mid = struct.unpack("!H", self._in_packet['packet']) + mid = mid[0] + self._easy_log(MQTT_LOG_DEBUG, "Received PUBREC (Mid: "+str(mid)+")") + + self._out_message_mutex.acquire() + for m in self._out_messages: + if m.mid == mid: + m.state = mqtt_ms_wait_for_pubcomp + m.timestamp = time.time() + self._out_message_mutex.release() + return self._send_pubrel(mid, False) + + self._out_message_mutex.release() + return MQTT_ERR_SUCCESS + + def _handle_unsuback(self): + if self._strict_protocol: + if self._in_packet['remaining_length'] != 2: + return MQTT_ERR_PROTOCOL + + mid = struct.unpack("!H", self._in_packet['packet']) + mid = mid[0] + self._easy_log(MQTT_LOG_DEBUG, "Received UNSUBACK (Mid: "+str(mid)+")") + self._callback_mutex.acquire() + if self.on_unsubscribe: + self._in_callback = True + self.on_unsubscribe(self, self._userdata, mid) + 
self._in_callback = False + self._callback_mutex.release() + return MQTT_ERR_SUCCESS + + def _handle_pubackcomp(self, cmd): + if self._strict_protocol: + if self._in_packet['remaining_length'] != 2: + return MQTT_ERR_PROTOCOL + + mid = struct.unpack("!H", self._in_packet['packet']) + mid = mid[0] + self._easy_log(MQTT_LOG_DEBUG, "Received "+cmd+" (Mid: "+str(mid)+")") + + self._out_message_mutex.acquire() + for i in range(len(self._out_messages)): + try: + if self._out_messages[i].mid == mid: + # Only inform the client the message has been sent once. + self._callback_mutex.acquire() + if self.on_publish: + self._out_message_mutex.release() + self._in_callback = True + self.on_publish(self, self._userdata, mid) + self._in_callback = False + self._out_message_mutex.acquire() + + self._callback_mutex.release() + self._out_messages.pop(i) + self._inflight_messages = self._inflight_messages - 1 + if self._max_inflight_messages > 0: + rc = self._update_inflight() + if rc != MQTT_ERR_SUCCESS: + self._out_message_mutex.release() + return rc + self._out_message_mutex.release() + return MQTT_ERR_SUCCESS + except IndexError: + # Have removed item so i>count. + # Not really an error. 
+ pass + + self._out_message_mutex.release() + return MQTT_ERR_SUCCESS + + def _handle_on_message(self, message): + self._callback_mutex.acquire() + matched = False + for t in self.on_message_filtered: + if topic_matches_sub(t[0], message.topic): + self._in_callback = True + t[1](self, self._userdata, message) + self._in_callback = False + matched = True + + if matched == False and self.on_message: + self._in_callback = True + self.on_message(self, self._userdata, message) + self._in_callback = False + + self._callback_mutex.release() + + def _thread_main(self): + self._state_mutex.acquire() + if self._state == mqtt_cs_connect_async: + self._state_mutex.release() + self.reconnect() + else: + self._state_mutex.release() + + self.loop_forever() + + def _host_matches_cert(self, host, cert_host): + if cert_host[0:2] == "*.": + if cert_host.count("*") != 1: + return False + + host_match = host.split(".", 1)[1] + cert_match = cert_host.split(".", 1)[1] + if host_match == cert_match: + return True + else: + return False + else: + if host == cert_host: + return True + else: + return False + + def _tls_match_hostname(self): + try: + cert = self._ssl.getpeercert() + except AttributeError: + # the getpeercert can throw Attribute error: object has no attribute 'peer_certificate' + # Don't let that crash the whole client. See also: http://bugs.python.org/issue13721 + raise ssl.SSLError('Not connected') + + san = cert.get('subjectAltName') + if san: + have_san_dns = False + for (key, value) in san: + if key == 'DNS': + have_san_dns = True + if self._host_matches_cert(self._host.lower(), value.lower()) == True: + return + if key == 'IP Address': + have_san_dns = True + if value.lower().strip() == self._host.lower().strip(): + return + + if have_san_dns: + # Only check subject if subjectAltName dns not found. 
+ raise ssl.SSLError('Certificate subject does not match remote hostname.') + subject = cert.get('subject') + if subject: + for ((key, value),) in subject: + if key == 'commonName': + if self._host_matches_cert(self._host.lower(), value.lower()) == True: + return + + raise ssl.SSLError('Certificate subject does not match remote hostname.') + + +# Compatibility class for easy porting from mosquitto.py. +class Mosquitto(Client): + def __init__(self, client_id="", clean_session=True, userdata=None): + super(Mosquitto, self).__init__(client_id, clean_session, userdata) diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/shadow/__init__.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/shadow/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/shadow/deviceShadow.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/shadow/deviceShadow.py new file mode 100644 index 0000000..f58240a --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/shadow/deviceShadow.py @@ -0,0 +1,430 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
+# */
+
+import json
+import logging
+import uuid
+from threading import Timer, Lock, Thread
+
+
+class _shadowRequestToken:
+
+    URN_PREFIX_LENGTH = 9
+
+    def getNextToken(self):
+        return uuid.uuid4().urn[self.URN_PREFIX_LENGTH:] # We only need the uuid digits, not the urn prefix
+
+
+class _basicJSONParser:
+
+    def setString(self, srcString):
+        self._rawString = srcString
+        self._dictionaryObject = None
+
+    def regenerateString(self):
+        return json.dumps(self._dictionaryObject)
+
+    def getAttributeValue(self, srcAttributeKey):
+        return self._dictionaryObject.get(srcAttributeKey)
+
+    def setAttributeValue(self, srcAttributeKey, srcAttributeValue):
+        self._dictionaryObject[srcAttributeKey] = srcAttributeValue
+
+    def validateJSON(self):
+        try:
+            self._dictionaryObject = json.loads(self._rawString)
+        except ValueError:
+            return False
+        return True
+
+
+class deviceShadow:
+    _logger = logging.getLogger(__name__)
+
+    def __init__(self, srcShadowName, srcIsPersistentSubscribe, srcShadowManager):
+        """
+
+        The class that denotes a local/client-side device shadow instance.
+
+        Users can perform shadow operations on this instance to retrieve and modify the
+        corresponding shadow JSON document in AWS IoT Cloud. The following shadow operations
+        are available:
+
+        - Get
+
+        - Update
+
+        - Delete
+
+        - Listen on delta
+
+        - Cancel listening on delta
+
+        This is returned from the :code:`AWSIoTPythonSDK.MQTTLib.AWSIoTMQTTShadowClient.createShadowWithName` function call.
+        No need to call directly from user scripts.
+
+        """
+        if srcShadowName is None or srcIsPersistentSubscribe is None or srcShadowManager is None:
+            raise TypeError("None type inputs detected.")
+        self._shadowName = srcShadowName
+        # Tool handler
+        self._shadowManagerHandler = srcShadowManager
+        self._basicJSONParserHandler = _basicJSONParser()
+        self._tokenHandler = _shadowRequestToken()
+        # Properties
+        self._isPersistentSubscribe = srcIsPersistentSubscribe
+        self._lastVersionInSync = -1 # -1 means not initialized
+        self._isGetSubscribed = False
+        self._isUpdateSubscribed = False
+        self._isDeleteSubscribed = False
+        self._shadowSubscribeCallbackTable = dict()
+        self._shadowSubscribeCallbackTable["get"] = None
+        self._shadowSubscribeCallbackTable["delete"] = None
+        self._shadowSubscribeCallbackTable["update"] = None
+        self._shadowSubscribeCallbackTable["delta"] = None
+        self._shadowSubscribeStatusTable = dict()
+        self._shadowSubscribeStatusTable["get"] = 0
+        self._shadowSubscribeStatusTable["delete"] = 0
+        self._shadowSubscribeStatusTable["update"] = 0
+        self._tokenPool = dict()
+        self._dataStructureLock = Lock()
+
+    def _doNonPersistentUnsubscribe(self, currentAction):
+        self._shadowManagerHandler.basicShadowUnsubscribe(self._shadowName, currentAction)
+        self._logger.info("Unsubscribed from " + currentAction + " accepted/rejected topics for deviceShadow: " + self._shadowName)
+
+    def generalCallback(self, client, userdata, message):
+        # In Py3.x, message.payload comes in as bytes;
+        # json.loads needs a string input
+        with self._dataStructureLock:
+            currentTopic = message.topic
+            currentAction = self._parseTopicAction(currentTopic) # get/delete/update/delta
+            currentType = self._parseTopicType(currentTopic) # accepted/rejected/delta
+            payloadUTF8String = message.payload.decode('utf-8')
+            # get/delete/update: Need to deal with token, timer and unsubscribe
+            if currentAction in ["get", "delete", "update"]:
+                # Check for token
+                self._basicJSONParserHandler.setString(payloadUTF8String)
+                if self._basicJSONParserHandler.validateJSON(): # Filter out invalid JSON
+                    currentToken = self._basicJSONParserHandler.getAttributeValue(u"clientToken")
+                    if currentToken is not None:
+                        self._logger.debug("shadow message clientToken: " + currentToken)
+                    if currentToken is not None and currentToken in self._tokenPool.keys(): # Filter out JSON without the desired token
+                        # Sync local version when it is an accepted response
+                        self._logger.debug("Token is in the pool. Type: " + currentType)
+                        if currentType == "accepted":
+                            incomingVersion = self._basicJSONParserHandler.getAttributeValue(u"version")
+                            # If it is a get/update accepted response, we need to sync the local version
+                            if incomingVersion is not None and incomingVersion > self._lastVersionInSync and currentAction != "delete":
+                                self._lastVersionInSync = incomingVersion
+                            # If it is a delete accepted, we need to reset the version
+                            else:
+                                self._lastVersionInSync = -1 # The version will always be synced for the next incoming delta/GU-accepted response
+                        # Cancel the timer and clear the token
+                        self._tokenPool[currentToken].cancel()
+                        del self._tokenPool[currentToken]
+                        # Need to unsubscribe?
+                        self._shadowSubscribeStatusTable[currentAction] -= 1
+                        if not self._isPersistentSubscribe and self._shadowSubscribeStatusTable.get(currentAction) <= 0:
+                            self._shadowSubscribeStatusTable[currentAction] = 0
+                            processNonPersistentUnsubscribe = Thread(target=self._doNonPersistentUnsubscribe, args=[currentAction])
+                            processNonPersistentUnsubscribe.start()
+                        # Custom callback
+                        if self._shadowSubscribeCallbackTable.get(currentAction) is not None:
+                            processCustomCallback = Thread(target=self._shadowSubscribeCallbackTable[currentAction], args=[payloadUTF8String, currentType, currentToken])
+                            processCustomCallback.start()
+            # delta: Watch for version
+            else:
+                currentType += "/" + self._parseTopicShadowName(currentTopic)
+                # Sync local version
+                self._basicJSONParserHandler.setString(payloadUTF8String)
+                if self._basicJSONParserHandler.validateJSON(): # Filter out JSON without version
+                    incomingVersion = self._basicJSONParserHandler.getAttributeValue(u"version")
+                    if incomingVersion is not None and incomingVersion > self._lastVersionInSync:
+                        self._lastVersionInSync = incomingVersion
+                # Custom callback
+                if self._shadowSubscribeCallbackTable.get(currentAction) is not None:
+                    processCustomCallback = Thread(target=self._shadowSubscribeCallbackTable[currentAction], args=[payloadUTF8String, currentType, None])
+                    processCustomCallback.start()
+
+    def _parseTopicAction(self, srcTopic):
+        ret = None
+        fragments = srcTopic.split('/')
+        if fragments[5] == "delta":
+            ret = "delta"
+        else:
+            ret = fragments[4]
+        return ret
+
+    def _parseTopicType(self, srcTopic):
+        fragments = srcTopic.split('/')
+        return fragments[5]
+
+    def _parseTopicShadowName(self, srcTopic):
+        fragments = srcTopic.split('/')
+        return fragments[2]
+
+    def _timerHandler(self, srcActionName, srcToken):
+        with self._dataStructureLock:
+            # Don't crash if we try to remove an unknown token
+            if srcToken not in self._tokenPool:
+                self._logger.warning('Tried to remove non-existent token from pool: %s' % str(srcToken))
return + # Remove the token + del self._tokenPool[srcToken] + # Need to unsubscribe? + self._shadowSubscribeStatusTable[srcActionName] -= 1 + if not self._isPersistentSubscribe and self._shadowSubscribeStatusTable.get(srcActionName) <= 0: + self._shadowSubscribeStatusTable[srcActionName] = 0 + self._shadowManagerHandler.basicShadowUnsubscribe(self._shadowName, srcActionName) + # Notify time-out issue + if self._shadowSubscribeCallbackTable.get(srcActionName) is not None: + self._logger.info("Shadow request with token: " + str(srcToken) + " has timed out.") + self._shadowSubscribeCallbackTable[srcActionName]("REQUEST TIME OUT", "timeout", srcToken) + + def shadowGet(self, srcCallback, srcTimeout): + """ + **Description** + + Retrieve the device shadow JSON document from AWS IoT by publishing an empty JSON document to the + corresponding shadow topics. Shadow response topics will be subscribed to receive responses from + AWS IoT regarding the result of the get operation. Retrieved shadow JSON document will be available + in the registered callback. If no response is received within the provided timeout, a timeout + notification will be passed into the registered callback. + + **Syntax** + + .. code:: python + + # Retrieve the shadow JSON document from AWS IoT, with a timeout set to 5 seconds + BotShadow.shadowGet(customCallback, 5) + + **Parameters** + + *srcCallback* - Function to be called when the response for this shadow request comes back. Should + be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the + JSON document returned, :code:`responseStatus` indicates whether the request has been accepted, + rejected or is a delta message, :code:`token` is the token used for tracing in this request. + + *srcTimeout* - Timeout to determine whether the request is invalid. When a request gets timeout, + a timeout notification will be generated and put into the registered callback to notify users. 
+ + **Returns** + + The token used for tracing in this shadow request. + + """ + with self._dataStructureLock: + # Update callback data structure + self._shadowSubscribeCallbackTable["get"] = srcCallback + # Update number of pending feedback + self._shadowSubscribeStatusTable["get"] += 1 + # clientToken + currentToken = self._tokenHandler.getNextToken() + self._tokenPool[currentToken] = Timer(srcTimeout, self._timerHandler, ["get", currentToken]) + self._basicJSONParserHandler.setString("{}") + self._basicJSONParserHandler.validateJSON() + self._basicJSONParserHandler.setAttributeValue("clientToken", currentToken) + currentPayload = self._basicJSONParserHandler.regenerateString() + # Two subscriptions + if not self._isPersistentSubscribe or not self._isGetSubscribed: + self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "get", self.generalCallback) + self._isGetSubscribed = True + self._logger.info("Subscribed to get accepted/rejected topics for deviceShadow: " + self._shadowName) + # One publish + self._shadowManagerHandler.basicShadowPublish(self._shadowName, "get", currentPayload) + # Start the timer + self._tokenPool[currentToken].start() + return currentToken + + def shadowDelete(self, srcCallback, srcTimeout): + """ + **Description** + + Delete the device shadow from AWS IoT by publishing an empty JSON document to the corresponding + shadow topics. Shadow response topics will be subscribed to receive responses from AWS IoT + regarding the result of the get operation. Responses will be available in the registered callback. + If no response is received within the provided timeout, a timeout notification will be passed into + the registered callback. + + **Syntax** + + .. code:: python + + # Delete the device shadow from AWS IoT, with a timeout set to 5 seconds + BotShadow.shadowDelete(customCallback, 5) + + **Parameters** + + *srcCallback* - Function to be called when the response for this shadow request comes back. 
Should + be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the + JSON document returned, :code:`responseStatus` indicates whether the request has been accepted, + rejected or is a delta message, :code:`token` is the token used for tracing in this request. + + *srcTimeout* - Timeout to determine whether the request is invalid. When a request gets timeout, + a timeout notification will be generated and put into the registered callback to notify users. + + **Returns** + + The token used for tracing in this shadow request. + + """ + with self._dataStructureLock: + # Update callback data structure + self._shadowSubscribeCallbackTable["delete"] = srcCallback + # Update number of pending feedback + self._shadowSubscribeStatusTable["delete"] += 1 + # clientToken + currentToken = self._tokenHandler.getNextToken() + self._tokenPool[currentToken] = Timer(srcTimeout, self._timerHandler, ["delete", currentToken]) + self._basicJSONParserHandler.setString("{}") + self._basicJSONParserHandler.validateJSON() + self._basicJSONParserHandler.setAttributeValue("clientToken", currentToken) + currentPayload = self._basicJSONParserHandler.regenerateString() + # Two subscriptions + if not self._isPersistentSubscribe or not self._isDeleteSubscribed: + self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "delete", self.generalCallback) + self._isDeleteSubscribed = True + self._logger.info("Subscribed to delete accepted/rejected topics for deviceShadow: " + self._shadowName) + # One publish + self._shadowManagerHandler.basicShadowPublish(self._shadowName, "delete", currentPayload) + # Start the timer + self._tokenPool[currentToken].start() + return currentToken + + def shadowUpdate(self, srcJSONPayload, srcCallback, srcTimeout): + """ + **Description** + + Update the device shadow JSON document string from AWS IoT by publishing the provided JSON + document to the corresponding shadow topics. 
Shadow response topics will be subscribed to + receive responses from AWS IoT regarding the result of the get operation. Response will be + available in the registered callback. If no response is received within the provided timeout, + a timeout notification will be passed into the registered callback. + + **Syntax** + + .. code:: python + + # Update the shadow JSON document from AWS IoT, with a timeout set to 5 seconds + BotShadow.shadowUpdate(newShadowJSONDocumentString, customCallback, 5) + + **Parameters** + + *srcJSONPayload* - JSON document string used to update shadow JSON document in AWS IoT. + + *srcCallback* - Function to be called when the response for this shadow request comes back. Should + be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the + JSON document returned, :code:`responseStatus` indicates whether the request has been accepted, + rejected or is a delta message, :code:`token` is the token used for tracing in this request. + + *srcTimeout* - Timeout to determine whether the request is invalid. When a request gets timeout, + a timeout notification will be generated and put into the registered callback to notify users. + + **Returns** + + The token used for tracing in this shadow request. 
+ + """ + # Validate JSON + self._basicJSONParserHandler.setString(srcJSONPayload) + if self._basicJSONParserHandler.validateJSON(): + with self._dataStructureLock: + # clientToken + currentToken = self._tokenHandler.getNextToken() + self._tokenPool[currentToken] = Timer(srcTimeout, self._timerHandler, ["update", currentToken]) + self._basicJSONParserHandler.setAttributeValue("clientToken", currentToken) + JSONPayloadWithToken = self._basicJSONParserHandler.regenerateString() + # Update callback data structure + self._shadowSubscribeCallbackTable["update"] = srcCallback + # Update number of pending feedback + self._shadowSubscribeStatusTable["update"] += 1 + # Two subscriptions + if not self._isPersistentSubscribe or not self._isUpdateSubscribed: + self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "update", self.generalCallback) + self._isUpdateSubscribed = True + self._logger.info("Subscribed to update accepted/rejected topics for deviceShadow: " + self._shadowName) + # One publish + self._shadowManagerHandler.basicShadowPublish(self._shadowName, "update", JSONPayloadWithToken) + # Start the timer + self._tokenPool[currentToken].start() + else: + raise ValueError("Invalid JSON file.") + return currentToken + + def shadowRegisterDeltaCallback(self, srcCallback): + """ + **Description** + + Listen on delta topics for this device shadow by subscribing to delta topics. Whenever there + is a difference between the desired and reported state, the registered callback will be called + and the delta payload will be available in the callback. + + **Syntax** + + .. code:: python + + # Listen on delta topics for BotShadow + BotShadow.shadowRegisterDeltaCallback(customCallback) + + **Parameters** + + *srcCallback* - Function to be called when the response for this shadow request comes back. 
Should + be in form :code:`customCallback(payload, responseStatus, token)`, where :code:`payload` is the + JSON document returned, :code:`responseStatus` indicates whether the request has been accepted, + rejected or is a delta message, :code:`token` is the token used for tracing in this request. + + **Returns** + + None + + """ + with self._dataStructureLock: + # Update callback data structure + self._shadowSubscribeCallbackTable["delta"] = srcCallback + # One subscription + self._shadowManagerHandler.basicShadowSubscribe(self._shadowName, "delta", self.generalCallback) + self._logger.info("Subscribed to delta topic for deviceShadow: " + self._shadowName) + + def shadowUnregisterDeltaCallback(self): + """ + **Description** + + Cancel listening on delta topics for this device shadow by unsubscribing to delta topics. There will + be no delta messages received after this API call even though there is a difference between the + desired and reported state. + + **Syntax** + + .. code:: python + + # Cancel listening on delta topics for BotShadow + BotShadow.shadowUnregisterDeltaCallback() + + **Parameters** + + None + + **Returns** + + None + + """ + with self._dataStructureLock: + # Update callback data structure + del self._shadowSubscribeCallbackTable["delta"] + # One unsubscription + self._shadowManagerHandler.basicShadowUnsubscribe(self._shadowName, "delta") + self._logger.info("Unsubscribed to delta topics for deviceShadow: " + self._shadowName) diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/shadow/shadowManager.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/shadow/shadowManager.py new file mode 100644 index 0000000..3dafa74 --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/shadow/shadowManager.py @@ -0,0 +1,83 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 
+# *
+# * Licensed under the Apache License, Version 2.0 (the "License").
+# * You may not use this file except in compliance with the License.
+# * A copy of the License is located at
+# *
+# *  http://aws.amazon.com/apache2.0
+# *
+# * or in the "license" file accompanying this file. This file is distributed
+# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
+# * express or implied. See the License for the specific language governing
+# * permissions and limitations under the License.
+# */
+
+import logging
+import time
+from threading import Lock
+
+
+class _shadowAction:
+    _actionType = ["get", "update", "delete", "delta"]
+
+    def __init__(self, srcShadowName, srcActionName):
+        if srcActionName is None or srcActionName not in self._actionType:
+            raise TypeError("Unsupported shadow action.")
+        self._shadowName = srcShadowName
+        self._actionName = srcActionName
+        self.isDelta = srcActionName == "delta"
+        if self.isDelta:
+            self._topicDelta = "$aws/things/" + str(self._shadowName) + "/shadow/update/delta"
+        else:
+            self._topicGeneral = "$aws/things/" + str(self._shadowName) + "/shadow/" + str(self._actionName)
+            self._topicAccept = "$aws/things/" + str(self._shadowName) + "/shadow/" + str(self._actionName) + "/accepted"
+            self._topicReject = "$aws/things/" + str(self._shadowName) + "/shadow/" + str(self._actionName) + "/rejected"
+
+    def getTopicGeneral(self):
+        return self._topicGeneral
+
+    def getTopicAccept(self):
+        return self._topicAccept
+
+    def getTopicReject(self):
+        return self._topicReject
+
+    def getTopicDelta(self):
+        return self._topicDelta
+
+
+class shadowManager:
+
+    _logger = logging.getLogger(__name__)
+
+    def __init__(self, srcMQTTCore):
+        # Load in mqttCore
+        if srcMQTTCore is None:
+            raise TypeError("None type inputs detected.")
+        self._mqttCoreHandler = srcMQTTCore
+        self._shadowSubUnsubOperationLock = Lock()
+
+    def basicShadowPublish(self, srcShadowName, srcShadowAction, srcPayload):
+        currentShadowAction = _shadowAction(srcShadowName, srcShadowAction)
+        self._mqttCoreHandler.publish(currentShadowAction.getTopicGeneral(), srcPayload, 0, False)
+
+    def basicShadowSubscribe(self, srcShadowName, srcShadowAction, srcCallback):
+        with self._shadowSubUnsubOperationLock:
+            currentShadowAction = _shadowAction(srcShadowName, srcShadowAction)
+            if currentShadowAction.isDelta:
+                self._mqttCoreHandler.subscribe(currentShadowAction.getTopicDelta(), 0, srcCallback)
+            else:
+                self._mqttCoreHandler.subscribe(currentShadowAction.getTopicAccept(), 0, srcCallback)
+                self._mqttCoreHandler.subscribe(currentShadowAction.getTopicReject(), 0, srcCallback)
+            time.sleep(2)
+
+    def basicShadowUnsubscribe(self, srcShadowName, srcShadowAction):
+        with self._shadowSubUnsubOperationLock:
+            currentShadowAction = _shadowAction(srcShadowName, srcShadowAction)
+            if currentShadowAction.isDelta:
+                self._mqttCoreHandler.unsubscribe(currentShadowAction.getTopicDelta())
+            else:
+                self._logger.debug(currentShadowAction.getTopicAccept())
+                self._mqttCoreHandler.unsubscribe(currentShadowAction.getTopicAccept())
+                self._logger.debug(currentShadowAction.getTopicReject())
+                self._mqttCoreHandler.unsubscribe(currentShadowAction.getTopicReject())
diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/util/__init__.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/util/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/util/enums.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/util/enums.py
new file mode 100644
index 0000000..3aa3d2f
--- /dev/null
+++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/util/enums.py
@@ -0,0 +1,19 @@
+# /*
+# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+# *
+# * Licensed under the Apache License, Version 2.0 (the "License").
+# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + + +class DropBehaviorTypes(object): + DROP_OLDEST = 0 + DROP_NEWEST = 1 diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/util/providers.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/util/providers.py new file mode 100644 index 0000000..d90789a --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/core/util/providers.py @@ -0,0 +1,92 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
+# */ + + +class CredentialsProvider(object): + + def __init__(self): + self._ca_path = "" + + def set_ca_path(self, ca_path): + self._ca_path = ca_path + + def get_ca_path(self): + return self._ca_path + + +class CertificateCredentialsProvider(CredentialsProvider): + + def __init__(self): + CredentialsProvider.__init__(self) + self._cert_path = "" + self._key_path = "" + + def set_cert_path(self,cert_path): + self._cert_path = cert_path + + def set_key_path(self, key_path): + self._key_path = key_path + + def get_cert_path(self): + return self._cert_path + + def get_key_path(self): + return self._key_path + + +class IAMCredentialsProvider(CredentialsProvider): + + def __init__(self): + CredentialsProvider.__init__(self) + self._aws_access_key_id = "" + self._aws_secret_access_key = "" + self._aws_session_token = "" + + def set_access_key_id(self, access_key_id): + self._aws_access_key_id = access_key_id + + def set_secret_access_key(self, secret_access_key): + self._aws_secret_access_key = secret_access_key + + def set_session_token(self, session_token): + self._aws_session_token = session_token + + def get_access_key_id(self): + return self._aws_access_key_id + + def get_secret_access_key(self): + return self._aws_secret_access_key + + def get_session_token(self): + return self._aws_session_token + + +class EndpointProvider(object): + + def __init__(self): + self._host = "" + self._port = -1 + + def set_host(self, host): + self._host = host + + def set_port(self, port): + self._port = port + + def get_host(self): + return self._host + + def get_port(self): + return self._port diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/exception/AWSIoTExceptions.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/exception/AWSIoTExceptions.py new file mode 100644 index 0000000..0de5401 --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/exception/AWSIoTExceptions.py @@ -0,0 +1,153 @@ 
+# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + +import AWSIoTPythonSDK.exception.operationTimeoutException as operationTimeoutException +import AWSIoTPythonSDK.exception.operationError as operationError + + +# Serial Exception +class acceptTimeoutException(Exception): + def __init__(self, msg="Accept Timeout"): + self.message = msg + + +# MQTT Operation Timeout Exception +class connectTimeoutException(operationTimeoutException.operationTimeoutException): + def __init__(self, msg="Connect Timeout"): + self.message = msg + + +class disconnectTimeoutException(operationTimeoutException.operationTimeoutException): + def __init__(self, msg="Disconnect Timeout"): + self.message = msg + + +class publishTimeoutException(operationTimeoutException.operationTimeoutException): + def __init__(self, msg="Publish Timeout"): + self.message = msg + + +class subscribeTimeoutException(operationTimeoutException.operationTimeoutException): + def __init__(self, msg="Subscribe Timeout"): + self.message = msg + + +class unsubscribeTimeoutException(operationTimeoutException.operationTimeoutException): + def __init__(self, msg="Unsubscribe Timeout"): + self.message = msg + + +# MQTT Operation Error +class connectError(operationError.operationError): + def __init__(self, errorCode): + self.message = "Connect Error: " + str(errorCode) + + +class disconnectError(operationError.operationError): + def __init__(self, errorCode): + 
self.message = "Disconnect Error: " + str(errorCode) + + +class publishError(operationError.operationError): + def __init__(self, errorCode): + self.message = "Publish Error: " + str(errorCode) + + +class publishQueueFullException(operationError.operationError): + def __init__(self): + self.message = "Internal Publish Queue Full" + + +class publishQueueDisabledException(operationError.operationError): + def __init__(self): + self.message = "Offline publish request dropped because queueing is disabled" + + +class subscribeError(operationError.operationError): + def __init__(self, errorCode): + self.message = "Subscribe Error: " + str(errorCode) + + +class subscribeQueueFullException(operationError.operationError): + def __init__(self): + self.message = "Internal Subscribe Queue Full" + + +class subscribeQueueDisabledException(operationError.operationError): + def __init__(self): + self.message = "Offline subscribe request dropped because queueing is disabled" + + +class unsubscribeError(operationError.operationError): + def __init__(self, errorCode): + self.message = "Unsubscribe Error: " + str(errorCode) + + +class unsubscribeQueueFullException(operationError.operationError): + def __init__(self): + self.message = "Internal Unsubscribe Queue Full" + + +class unsubscribeQueueDisabledException(operationError.operationError): + def __init__(self): + self.message = "Offline unsubscribe request dropped because queueing is disabled" + + +# Websocket Error +class wssNoKeyInEnvironmentError(operationError.operationError): + def __init__(self): + self.message = "No AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY detected in $ENV." + + +class wssHandShakeError(operationError.operationError): + def __init__(self): + self.message = "Error in WSS handshake." 
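
The operation-specific timeout classes above all derive from `operationTimeoutException`, so a caller can catch the shared base class instead of enumerating each timeout type. A minimal sketch of that pattern — using illustrative stand-in classes that mirror the hierarchy, not the real `AWSIoTPythonSDK.exception` imports:

```python
# Stand-ins mirroring the SDK's exception pattern (illustrative only).

class operationTimeoutException(Exception):
    def __init__(self, msg="Operation Timeout"):
        self.message = msg


class connectTimeoutException(operationTimeoutException):
    def __init__(self, msg="Connect Timeout"):
        self.message = msg


def flaky_connect():
    # Hypothetical operation that times out.
    raise connectTimeoutException()


try:
    flaky_connect()
except operationTimeoutException as e:
    # The base class catches every per-operation timeout
    # (connect, publish, subscribe, ...) in one place.
    print(e.message)
```
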
+ + +# Greengrass Discovery Error +class DiscoveryDataNotFoundException(operationError.operationError): + def __init__(self): + self.message = "No discovery data found" + + +class DiscoveryTimeoutException(operationTimeoutException.operationTimeoutException): + def __init__(self, message="Discovery request timed out"): + self.message = message + + +class DiscoveryInvalidRequestException(operationError.operationError): + def __init__(self): + self.message = "Invalid discovery request" + + +class DiscoveryUnauthorizedException(operationError.operationError): + def __init__(self): + self.message = "Discovery request not authorized" + + +class DiscoveryThrottlingException(operationError.operationError): + def __init__(self): + self.message = "Too many discovery requests" + + +class DiscoveryFailure(operationError.operationError): + def __init__(self, message): + self.message = message + + +# Client Error +class ClientError(Exception): + def __init__(self, message): + self.message = message diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/exception/__init__.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/exception/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/exception/operationError.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/exception/operationError.py new file mode 100644 index 0000000..1c86dfc --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/exception/operationError.py @@ -0,0 +1,19 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. 
+# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. +# */ + + +class operationError(Exception): + def __init__(self, msg="Operation Error"): + self.message = msg diff --git a/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/exception/operationTimeoutException.py b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/exception/operationTimeoutException.py new file mode 100644 index 0000000..737154e --- /dev/null +++ b/aws-iot-device-sdk-python/build/lib.linux-x86_64-2.7/AWSIoTPythonSDK/exception/operationTimeoutException.py @@ -0,0 +1,19 @@ +# /* +# * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# * +# * Licensed under the Apache License, Version 2.0 (the "License"). +# * You may not use this file except in compliance with the License. +# * A copy of the License is located at +# * +# * http://aws.amazon.com/apache2.0 +# * +# * or in the "license" file accompanying this file. This file is distributed +# * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either +# * express or implied. See the License for the specific language governing +# * permissions and limitations under the License. 
+# */ + + +class operationTimeoutException(Exception): + def __init__(self, msg="Operation Timeout"): + self.message = msg diff --git a/aws-iot-device-sdk-python/setup.cfg b/aws-iot-device-sdk-python/setup.cfg new file mode 100644 index 0000000..5aef279 --- /dev/null +++ b/aws-iot-device-sdk-python/setup.cfg @@ -0,0 +1,2 @@ +[metadata] +description-file = README.rst diff --git a/aws-iot-device-sdk-python/setup.py b/aws-iot-device-sdk-python/setup.py new file mode 100644 index 0000000..3846bae --- /dev/null +++ b/aws-iot-device-sdk-python/setup.py @@ -0,0 +1,34 @@ +import sys +sys.path.insert(0, 'AWSIoTPythonSDK') +import AWSIoTPythonSDK +currentVersion = AWSIoTPythonSDK.__version__ + +from distutils.core import setup +setup( + name = 'AWSIoTPythonSDK', + packages=['AWSIoTPythonSDK', 'AWSIoTPythonSDK.core', + 'AWSIoTPythonSDK.core.util', 'AWSIoTPythonSDK.core.shadow', 'AWSIoTPythonSDK.core.protocol', + 'AWSIoTPythonSDK.core.jobs', + 'AWSIoTPythonSDK.core.protocol.paho', 'AWSIoTPythonSDK.core.protocol.internal', + 'AWSIoTPythonSDK.core.protocol.connection', 'AWSIoTPythonSDK.core.greengrass', + 'AWSIoTPythonSDK.core.greengrass.discovery', 'AWSIoTPythonSDK.exception'], + version = currentVersion, + description = 'SDK for connecting to AWS IoT using Python.', + author = 'Amazon Web Service', + author_email = '', + url = 'https://github.com/aws/aws-iot-device-sdk-python.git', + download_url = 'https://s3.amazonaws.com/aws-iot-device-sdk-python/aws-iot-device-sdk-python-latest.zip', + keywords = ['aws', 'iot', 'mqtt'], + classifiers = [ + "Development Status :: 5 - Production/Stable", + "Intended Audience :: Developers", + "Natural Language :: English", + "License :: OSI Approved :: Apache Software License", + "Programming Language :: Python", + "Programming Language :: Python :: 2.7", + "Programming Language :: Python :: 3", + "Programming Language :: Python :: 3.3", + "Programming Language :: Python :: 3.4", + "Programming Language :: Python :: 3.5" + ] +) diff --git 
a/data_points.py b/data_points.py
new file mode 100644
index 0000000..b7894b8
--- /dev/null
+++ b/data_points.py
@@ -0,0 +1,119 @@
+from datetime import datetime
+import time
+import minimalmodbus
+from pycomm.ab_comm.clx import Driver as clx
+from pycomm.cip.cip_base import CommError, DataError
+
+class DataPoint(object):
+    def __init__(self, changeThreshold=0, guaranteed=3600, name="datapoint", alertThreshold=(), alertCondition=(), alertResponse=(), alertContact=()):
+        self.value = None
+        self.lastvalue = None
+        self.lastsend = 0
+        self.changeThreshold = changeThreshold
+        self.guaranteed = guaranteed
+        self.name = name
+        self.alerted = False
+        self.alertThreshold = alertThreshold
+        self.alertCondition = alertCondition
+        self.alertResponse = alertResponse
+        self.alertContact = alertContact
+
+
+    def checkSend(self, value):
+        if value != self.lastvalue or (time.time() - self.lastsend > self.guaranteed):
+            self.lastsend = time.time()
+            self.lastvalue = value
+            return True
+        else:
+            return False
+
+    def checkAlert(self, value):
+        conditions = {
+            "gt": "value > threshold",
+            "lt": "value < threshold",
+            "eq": "value == threshold",
+            "gte": "value >= threshold",
+            "lte": "value <= threshold",
+            "not": "value != threshold"
+        }
+
+        for thres, cond in zip(self.alertThreshold, self.alertCondition):
+            # Check value against this alert threshold
+            evalVars = {
+                "value": value,
+                "threshold": thres
+            }
+            func = conditions.get(cond)
+            if func is None:
+                print("Not an available function: {}".format(cond))
+            else:
+                if eval(func, evalVars):
+                    return {"message": "Read value for {} is {} threshold value {}".format(self.name, value, thres)}
+                else:
+                    self.alerted = False
+        return None
+
+
+class modbusDataPoint(DataPoint):
+    def __init__(self, changeThreshold, guaranteed, name, register=1, baud=19200, stopBits=1, parity=None, device='/dev/ttyS0'):
+        DataPoint.__init__(self, changeThreshold, guaranteed, name)
+        self.register = register
+        self.baud = baud
+        self.stopBits = stopBits
+        self.parity = parity
self.device = device + def read(self): + pass + + def write(self): + pass + +class plcDataPoint(DataPoint): + def __init__(self,changeThreshold,guaranteed,name,plcIP='192.168.1.10',plcType='Micro800',tag=None,alertThreshold=[],alertCondition=[],alertResponse=[],alertContact=[]): + DataPoint.__init__(self,changeThreshold,guaranteed,name,alertThreshold,alertCondition,alertResponse,alertContact) + self.plcIP = plcIP + self.plcType = plcType + self.tag = tag + + def read(self): + direct_connect = self.plcType == "Micro800" + c = clx() + try: + if c.open(self.plcIP,direct_connect): + try: + val = c.read_tag(self.tag) + c.close() + alertMessage = self.checkAlert(val[0]) + return val[0], alertMessage + except DataError as derr: + print("Error: {}".format(derr)) + c.close() + except CommError as cerr: + print("Error: {}".format(cerr)) + + return False + + def write(self): + pass + +class currentDataPoint(DataPoint): + def __init__(self,changeThreshold,guaranteed,name, euMin=0, euMax=100, rawMin=4, rawMax=20): + DataPoint.__init__(self,changeThreshold,guaranteed,name) + self.euMin = euMin + self.euMax = euMax + self.rawMin = rawMin + self.rawMax = rawMax + + def read(self): + pass + +class voltageDataPoint(DataPoint): + def __init__(self,changeThreshold,guaranteed,name, euMin=0, euMax=100, rawMin=0, rawMax=10): + DataPoint.__init__(self,changeThreshold,guaranteed,name) + self.euMin = euMin + self.euMax = euMax + self.rawMin = rawMin + self.rawMax = rawMax + + def read(self): + pass diff --git a/data_points.pyc b/data_points.pyc new file mode 100644 index 0000000..8a1ce46 Binary files /dev/null and b/data_points.pyc differ diff --git a/device1Cert.key b/device1Cert.key new file mode 100644 index 0000000..cdf13d5 --- /dev/null +++ b/device1Cert.key @@ -0,0 +1,27 @@ +-----BEGIN RSA PRIVATE KEY----- +MIIEowIBAAKCAQEAtle0G78fyQE4l1IoYnSp7iaUzmFpYc1tJkP2KHxmHxXwkvqI +S+gRap582ngcBccXaFnG48+ooagmcv5DQaaRSYrdKP6XFNgd86jwPEDWHWFKg7/C 
+JVQpMauzQd8DUIX7hcMuS0jD2BDfAxVIoZCOrT1ow5fbRb3tKKltN4szLERl6QLJ +89OjT8P+ZpuW02Zpw0pjMKSLeCdjjBMsa8ELuRQGwzTS/+cfXlN21zemDzUv/Udp +VNH+tYbRGO/kxfm1k9WVOZiidARjG/bTWMlJl71Li6K0mMqEv3Qta3XArhu2/GC0 +E7JLtPnKNQonieEmlXyNR26kHEcUAgOfxy/J+QIDAQABAoIBAHhCOZg/IiB4fLFY +Tyg4F0bpDSVcG5uUV5NwKS4kdVm1J5hYQYIGiU4PPvr7UkgBOY/p/gGLmIUdmFYp +GYR37cRaIGiEGHJ34rErz119yXlRDEr+MnZaHl0TB8O+6Lm3094xjxu53uhmoB6x +b9iWtXLOWIT/Z2+ExqAVteF3HgXn7LE4B/bzZ/9571M8+DRcMMxUhh5+aFxldwY4 +AJa9JgIiBnRoRUO0U9c6tkIG8M6Xq5uFGMnd1CZMEz9QCKAbzxcH8eVy2R/k/hc/ +N+g1Zx8TxzpKYmaFPk+vZnt9AVcKxadjXiDSFPV4xZ5fpnoIO9mpw6he1sqv5AVB +Ni8hcDECgYEA6CIF7lgTSWCUGPCDF1ZrQykrOdCD86nthuzWU4wRmBE1KQK+RSt0 +gNM38yDOtHE3L04iaSC9dqmGbvae62W3AijOA95G0eY6nJP4ia/e/sfbkI4BXOAX +5k5m0ZV9HMNAMpthVtrf7ZkFPF7+suYp8Eoc2qo1hPY2+PnjPmplKc0CgYEAyRcl +7mI6Kk9ciZYTsSeVjPTQszLEga9nqoFhfhRGdFFC/LmwZ1gGSFuag30wWMbf0Chi +rDyLzduccfacSdkPEKAuKThe7iI63XHsWMQrgwi5I84+k4rDR9nhjAezrrbfZfhu +S2xEBWB6OX0yFbeVFfTqXBlzScuiymwEwoSBhN0CgYEAlWjAtIYP89yrtdmoJq9C +3rlyzwV8yKqI7Z0m3iN7d4sr0jeny9GKbRiGHIDzSoTMZjA+Sbf++o9mraki5JRV +VJh68VZx8svi0cET6Vs/hnGQytv72JGMEHpKB3/WRVsOyQPlhQfftYgWLKNgADnQ +qI6rP7rqM6hd/aapMxU8A8kCgYB/Dqo/2j7INwbQRExC9jDvNEydvWkeS/cja8Zv +BF6T5jh+ONG2Ko8lrwONK0+d+GK4Qpw+Ga94LdfGxjxwCL8VETC5iM2qh2RMQUxF +tgWMMLnSXuF5FgdXYdq6QK+OqCu1YWhHLaw4/YGcy3cW8702d16RPN90dD9yyRek +1FaF3QKBgEDic6rSZOCMxV2CNpPgPSR0KcK01vycyj0V433g0PSoZ+qwbD2qMeZL +w5A2qWaAmzVSVsKrFWhbEN9tFIPPOU6oyEtEW8KdP+lGcf1ks9Y65gGfHzU5sEfb +FYareLdzs2GTluMTGnk4uS1cjT2sQDitLjrOw9YqWa4BmSvdhcW3 +-----END RSA PRIVATE KEY----- diff --git a/device1Cert.pem b/device1Cert.pem new file mode 100644 index 0000000..ebe2afd --- /dev/null +++ b/device1Cert.pem @@ -0,0 +1,23 @@ +-----BEGIN CERTIFICATE----- +MIID0jCCAroCFFjR75nvGyoFpSn0YFt3YZ0ejZ7GMA0GCSqGSIb3DQEBCwUAMIGQ +MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB01pZGxhbmQx +EzARBgNVBAoMCkhlbnJ5IFB1bXAxEzARBgNVBAsMCkF1dG9tYXRpb24xDjAMBgNV +BAMMBUhQSW9UMSUwIwYJKoZIhvcNAQkBFhZub3JlcGx5QGhlbnJ5LXB1bXAuY29t 
+MB4XDTIwMDEyMDIwMjQwOFoXDTIxMDExOTIwMjQwOFowgbkxCzAJBgNVBAYTAlVT +MQ4wDAYDVQQHDAVUZXhhczETMBEGA1UECgwKSGVucnkgUHVtcDETMBEGA1UECwwK +QXV0b21hdGlvbjFJMEcGA1UEAwxAZjUyYzliZWQwOTk3YzhmOTJiNDFiYzA4NWMy +MGIwZWFhNDdmYmZhOGY3OGJiODYzMTAwODdhMjRhODcyMTQwMTElMCMGCSqGSIb3 +DQEJARYWbm9yZXBseUBoZW5yeS1wdW1wLmNvbTCCASIwDQYJKoZIhvcNAQEBBQAD +ggEPADCCAQoCggEBALZXtBu/H8kBOJdSKGJ0qe4mlM5haWHNbSZD9ih8Zh8V8JL6 +iEvoEWqefNp4HAXHF2hZxuPPqKGoJnL+Q0GmkUmK3Sj+lxTYHfOo8DxA1h1hSoO/ +wiVUKTGrs0HfA1CF+4XDLktIw9gQ3wMVSKGQjq09aMOX20W97SipbTeLMyxEZekC +yfPTo0/D/mabltNmacNKYzCki3gnY4wTLGvBC7kUBsM00v/nH15Tdtc3pg81L/1H +aVTR/rWG0Rjv5MX5tZPVlTmYonQEYxv201jJSZe9S4uitJjKhL90LWt1wK4btvxg +tBOyS7T5yjUKJ4nhJpV8jUdupBxHFAIDn8cvyfkCAwEAATANBgkqhkiG9w0BAQsF +AAOCAQEATPlVtR0/I+fy5iSmLKoBexQPC4utffCyppW+onoLCAetpKpCpsyYtb74 +FkefqCIyjcpjuKJJNnKVHGUr7hr3L3hDzybTxNu8LUpfioNPlbNjdowi29W3I1MX +2miDwylAL4F5X/hQkmJ8jxdLFdI2obcGqo7vzvryY25BRhT9H5VOcDYNlC/gvaN1 +exsv8bIyo1BdwVzcW0ucDRjXbbUNBkMM6J7LLh4X3ZvAxe62CQfrw3pUmeml+bi1 +IGSmA0QgJwtH+LVbqHlQfOhQFHrBr8SfrbyqDyqeRG13eaiwjqAczR902IHG1pev +ZOAqwqO3Vaf6yYh80iX3hFDKZ5QN+A== +-----END CERTIFICATE----- diff --git a/device1CertAndCACert.pem b/device1CertAndCACert.pem new file mode 100644 index 0000000..ad315f3 --- /dev/null +++ b/device1CertAndCACert.pem @@ -0,0 +1,47 @@ +-----BEGIN CERTIFICATE----- +MIID0jCCAroCFFjR75nvGyoFpSn0YFt3YZ0ejZ7GMA0GCSqGSIb3DQEBCwUAMIGQ +MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB01pZGxhbmQx +EzARBgNVBAoMCkhlbnJ5IFB1bXAxEzARBgNVBAsMCkF1dG9tYXRpb24xDjAMBgNV +BAMMBUhQSW9UMSUwIwYJKoZIhvcNAQkBFhZub3JlcGx5QGhlbnJ5LXB1bXAuY29t +MB4XDTIwMDEyMDIwMjQwOFoXDTIxMDExOTIwMjQwOFowgbkxCzAJBgNVBAYTAlVT +MQ4wDAYDVQQHDAVUZXhhczETMBEGA1UECgwKSGVucnkgUHVtcDETMBEGA1UECwwK +QXV0b21hdGlvbjFJMEcGA1UEAwxAZjUyYzliZWQwOTk3YzhmOTJiNDFiYzA4NWMy +MGIwZWFhNDdmYmZhOGY3OGJiODYzMTAwODdhMjRhODcyMTQwMTElMCMGCSqGSIb3 +DQEJARYWbm9yZXBseUBoZW5yeS1wdW1wLmNvbTCCASIwDQYJKoZIhvcNAQEBBQAD +ggEPADCCAQoCggEBALZXtBu/H8kBOJdSKGJ0qe4mlM5haWHNbSZD9ih8Zh8V8JL6 
+iEvoEWqefNp4HAXHF2hZxuPPqKGoJnL+Q0GmkUmK3Sj+lxTYHfOo8DxA1h1hSoO/ +wiVUKTGrs0HfA1CF+4XDLktIw9gQ3wMVSKGQjq09aMOX20W97SipbTeLMyxEZekC +yfPTo0/D/mabltNmacNKYzCki3gnY4wTLGvBC7kUBsM00v/nH15Tdtc3pg81L/1H +aVTR/rWG0Rjv5MX5tZPVlTmYonQEYxv201jJSZe9S4uitJjKhL90LWt1wK4btvxg +tBOyS7T5yjUKJ4nhJpV8jUdupBxHFAIDn8cvyfkCAwEAATANBgkqhkiG9w0BAQsF +AAOCAQEATPlVtR0/I+fy5iSmLKoBexQPC4utffCyppW+onoLCAetpKpCpsyYtb74 +FkefqCIyjcpjuKJJNnKVHGUr7hr3L3hDzybTxNu8LUpfioNPlbNjdowi29W3I1MX +2miDwylAL4F5X/hQkmJ8jxdLFdI2obcGqo7vzvryY25BRhT9H5VOcDYNlC/gvaN1 +exsv8bIyo1BdwVzcW0ucDRjXbbUNBkMM6J7LLh4X3ZvAxe62CQfrw3pUmeml+bi1 +IGSmA0QgJwtH+LVbqHlQfOhQFHrBr8SfrbyqDyqeRG13eaiwjqAczR902IHG1pev +ZOAqwqO3Vaf6yYh80iX3hFDKZ5QN+A== +-----END CERTIFICATE----- +-----BEGIN CERTIFICATE----- +MIIEAzCCAuugAwIBAgIUFCudUXwBqKUNreGC28n/HyRCLZowDQYJKoZIhvcNAQEL +BQAwgZAxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVUZXhhczEQMA4GA1UEBwwHTWlk +bGFuZDETMBEGA1UECgwKSGVucnkgUHVtcDETMBEGA1UECwwKQXV0b21hdGlvbjEO +MAwGA1UEAwwFSFBJb1QxJTAjBgkqhkiG9w0BCQEWFm5vcmVwbHlAaGVucnktcHVt +cC5jb20wHhcNMTkxMTIwMTYwMDE3WhcNMjIwOTA5MTYwMDE3WjCBkDELMAkGA1UE +BhMCVVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdNaWRsYW5kMRMwEQYDVQQK +DApIZW5yeSBQdW1wMRMwEQYDVQQLDApBdXRvbWF0aW9uMQ4wDAYDVQQDDAVIUElv +VDElMCMGCSqGSIb3DQEJARYWbm9yZXBseUBoZW5yeS1wdW1wLmNvbTCCASIwDQYJ +KoZIhvcNAQEBBQADggEPADCCAQoCggEBAONzfIpip5r/jQuDH6T5RfETBUQz2fz6 +XgmzuMV6cxnpgbL+TTg6XUPmYirTpiT4n+uLzOmv3YlDJwvlci9VTBtrZngrS0iL +/izL1eL2cxIXlT8EVngR+f6JEuYN5ZGYsWrvEf7wJkqpeR99PJwmgoEwWEFDF1Ri +j6A/YuLEmJs8+Ox5ndj7fI7xU/5c2nBCayHpSQEXh9KAMIJ1oi9qAKVgQpczqXLl +h6tzlqyB2eQfSSSch6SjXMJ8z3H8m3QxTiVfk95LX0E16ufF0f5WDTAB6HFdSs3C +9MISDWkzTNt+ayl6WFi2tCHGUHstjrKpwKu0BSRij1FauoCmwIiti5sCAwEAAaNT +MFEwHQYDVR0OBBYEFPS+HjbxdMY+0FyHD8QGdKpYeXFOMB8GA1UdIwQYMBaAFPS+ +HjbxdMY+0FyHD8QGdKpYeXFOMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL +BQADggEBAK/rznXdYhm5cTJWfJn7oU1aaU3i0PDD9iL72kRqyaeKY0Be0iUDCXlB +zCnC3RVWD5RCnktU6RhxcvuOJhisOmr+nVDamk93771+D2Dc0ONCEMq6uRFjykYs 
+iV1V0DOYJ/G1pq9bXaKT9CGsLt0r9DKasy8+Bl/U5//MPYbunDGZO7MwwV9YZXns +BLGWsjlRRQEj2IPeIobygajhBn5KHLIfVp9iI5bg68Zpf0VScKFIzo7wej5bX5xV +hrlX48fFgM/M0Q2zGauVPAiY1aV4FctdmfstEjoaXAlkQQUsCDTdpTjIPrnLLvd1 +lqM/pJrHKTd2pLeRpFEtPWWTJt1Sff4= +-----END CERTIFICATE----- diff --git a/driver.py b/driver.py new file mode 100644 index 0000000..c5d34ad --- /dev/null +++ b/driver.py @@ -0,0 +1,143 @@ +import json +import time +from datetime import datetime as datetime +import logging +from logging.handlers import RotatingFileHandler +import sys +from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient +import threading +from data_points import plcDataPoint,modbusDataPoint,currentDataPoint,voltageDataPoint + + +def run(config, device, port, host, rootCAPath): + log_formatter = logging.Formatter('%(asctime)s %(levelname)s %(funcName)s(%(lineno)d) %(message)s') + log_file = './logs/{}.log'.format(device) + my_handler = RotatingFileHandler(log_file, mode='a', maxBytes=500*1024, backupCount=2, encoding=None, delay=0) + my_handler.setFormatter(log_formatter) + my_handler.setLevel(logging.INFO) + filelogger = logging.getLogger('{}'.format(device)) + filelogger.setLevel(logging.INFO) + filelogger.addHandler(my_handler) + + console_out = logging.StreamHandler(sys.stdout) + console_out.setFormatter(log_formatter) + filelogger.addHandler(console_out) + + filelogger.info("IN Driver") + filelogger.info("Got Config: \n{}".format(config)) + + #Extract data from passed in config + app = config['appname'] + company = config['company'] + field = config['field'] + locationID = config['locationID'] + #deviceType = config['deviceType'] + certificateID = config['certificateID'] + + #Build a topic and last will payload + dt_topic = "dt/{}/{}/{}/{}".format(app, company, field, locationID) + alm_topic = "alm/{}/{}/{}/{}".format(app,company, field, locationID) + lwtPayload = {"connected": 0} + #Generate a cert if needed + + #Configure connection to AWS IoT Core with proper certificate + myAWSIoTMQTTClient = None 
+ myAWSIoTMQTTClient = AWSIoTMQTTClient(certificateID) + myAWSIoTMQTTClient.configureEndpoint(host, port) + myAWSIoTMQTTClient.configureCredentials(rootCAPath, './device1Cert.key', './device1CertAndCACert.pem') + myAWSIoTMQTTClient.configureLastWill(dt_topic,json.dumps(lwtPayload),1) + try: + myAWSIoTMQTTClient.connect() + connectedPayload = {"connected": 1} + myAWSIoTMQTTClient.publish(dt_topic, json.dumps(connectedPayload),1) + except Exception as e: + filelogger.info("Didn't connect: {}".format(e)) + #build data points loop through config and use a class to make a data point + #if plcdata != to empty then setup polls for tags + #use ping and reads as watchdog values for connectivity + #if modbusdata != to empty then setup polls for registers + #use reads as watchdog values for connectivity + #if currentdata != to empty then setup polls for current + #if raw current value > 3.5 then current is good else current disconnected + #if voltagedata != to empty then setup polls for voltage + #if raw voltage value > 0 then voltage is good else voltage disconnected + datapoints = [] + if not config["PLCData"] == "empty": + for key in config['PLCData'].keys(): + changeThreshold = config['PLCData'][key]["changeThreshold"] + guaranteed = config['PLCData'][key]["guaranteed"] + plcIP = config['PLCData'][key]["plcIP"] + plcType = config['PLCData'][key]["plcType"] + tag = config['PLCData'][key]["tag"] + name = config['PLCData'][key]["name"] + if "alert" in config['PLCData'][key].keys(): + threshold = config['PLCData'][key]["alert"]["threshold"] + condition = config['PLCData'][key]["alert"]["condition"] + response = config['PLCData'][key]["alert"]["response"] + contact = config['PLCData'][key]["alert"]["contact"] + datapoint = plcDataPoint(changeThreshold,guaranteed,str(name),plcIP=str(plcIP),plcType=str(plcType),tag=str(tag),alertThreshold=threshold,alertCondition=condition,alertResponse=response,alertContact=contact) + else: + datapoint = 
plcDataPoint(changeThreshold,guaranteed,str(name),plcIP=str(plcIP),plcType=str(plcType),tag=str(tag))
+            datapoints.append(datapoint)
+
+    if not config["modbusData"] == "empty":
+        pass
+    if not config["currentData"] == "empty":
+        pass
+    if not config["voltageData"] == "empty":
+        pass
+
+
+    #build alert points
+    #A function for polling general data can be latent no greater than a min between polls
+    #loop through list of data points to read and check value changes
+    #sleep for 30 secs
+    def dataCollection():
+        while True:
+            message = {}
+            for datapoint in datapoints:
+                # read() returns False on a comm/data error; fall back to
+                # (None, None) so a failed poll doesn't crash the loop
+                val, alertMessage = datapoint.read() or (None, None)
+                if alertMessage is not None and not datapoint.alerted:
+                    myAWSIoTMQTTClient.publish(alm_topic, json.dumps(alertMessage), 1)
+                    datapoint.alerted = True
+                if val is not None and datapoint.checkSend(val):
+                    message[datapoint.name] = val
+            if message:
+                message["timestamp"] = datetime.now().isoformat()
+                filelogger.info("Publishing: {}\nTo Topic: {}".format(message, dt_topic))
+                myAWSIoTMQTTClient.publish(dt_topic, json.dumps(message), 1)
+            time.sleep(5)
+
+    #A function for polling alert data should be very near real time
+    #if plcdata != to empty then setup polls for tags
+    #use ping and reads as watchdog values for connectivity
+    #if modbusdata != to empty then setup polls for registers
+    #use reads as watchdog values for connectivity
+    #if currentdata != to empty then setup polls for current
+    #if raw current value > 3.5 then current is good else current disconnected
+    #if voltagedata != to empty then setup polls for voltage
+    #if raw voltage value > 0 then voltage is good else voltage disconnected
+    #sleep for 1 secs
+    def alertCollection():
+        pass
+    #Start a thread for data and a thread for alerts
+
+
+    # list of all threads, so that they can be killed afterwards
+    all_threads = []
+
+    data_thread = threading.Thread(target=dataCollection, args=(), name="Thread-data")
+    data_thread.start()
+    all_threads.append(data_thread)
+
+    alert_thread = threading.Thread(target=alertCollection, args=(), 
name="Thread-alerts") + alert_thread.start() + all_threads.append(alert_thread) + + + + for thread in all_threads: + thread.join() + + #myAWSIoTMQTTClient.disconnect() + diff --git a/driver.pyc b/driver.pyc new file mode 100644 index 0000000..8fe586e Binary files /dev/null and b/driver.pyc differ diff --git a/logs/device1.log b/logs/device1.log new file mode 100644 index 0000000..250aea1 --- /dev/null +++ b/logs/device1.log @@ -0,0 +1,93 @@ +2020-01-20 14:24:51,205 INFO run(26) IN Driver +2020-01-20 14:24:51,206 INFO run(27) Got Config: +{u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': u'empty', u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'} +2020-01-20 14:26:34,220 INFO run(26) IN Driver +2020-01-20 14:26:34,222 INFO run(27) Got Config: +{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': u'empty', u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'} +2020-01-20 15:08:28,235 INFO run(26) IN Driver +2020-01-20 15:08:28,236 INFO run(27) Got Config: +{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': u'empty', u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'} +2020-01-20 15:09:18,894 INFO run(26) IN Driver +2020-01-20 15:09:18,895 INFO run(27) Got Config: +{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 
1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'} +2020-01-20 15:10:19,977 INFO run(26) IN Driver +2020-01-20 15:10:19,979 INFO run(27) Got Config: +{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'} +2020-01-21 13:16:52,980 INFO run(26) IN Driver +2020-01-21 13:16:52,981 INFO run(27) Got Config: +{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'} +2020-01-21 13:18:28,723 INFO run(26) IN Driver +2020-01-21 13:18:28,724 INFO run(27) Got Config: +{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': 
{u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'} +2020-01-21 13:21:45,694 INFO run(26) IN Driver +2020-01-21 13:21:45,695 INFO run(27) Got Config: +{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'} +2020-01-21 13:23:56,621 INFO run(26) IN Driver +2020-01-21 13:23:56,622 INFO run(27) Got Config: +{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'} +2020-01-21 13:24:25,281 INFO run(26) IN Driver +2020-01-21 13:24:25,283 INFO run(27) Got Config: +{u'certificateID': 
u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'} +2020-01-21 13:40:43,424 INFO run(26) IN Driver +2020-01-21 13:40:43,427 INFO run(27) Got Config: +{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'} +2020-01-21 13:41:20,835 INFO run(26) IN Driver +2020-01-21 13:41:20,836 INFO run(27) Got Config: +{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, 
u'currentData': u'empty'} +2020-01-21 13:50:21,215 INFO run(26) IN Driver +2020-01-21 13:50:21,217 INFO run(27) Got Config: +{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'henrypump', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': 0, u'currentData': u'empty'} +2020-01-21 13:50:21,739 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:21.739073', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/henrypump/inventory/0 +2020-01-21 13:50:26,876 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:26.876741', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/henrypump/inventory/0 +2020-01-21 13:50:32,065 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:32.065283', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/henrypump/inventory/0 +2020-01-21 13:50:37,202 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:37.201957', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/henrypump/inventory/0 +2020-01-21 13:50:42,385 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:42.385094', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/henrypump/inventory/0 +2020-01-21 13:50:47,523 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:47.523263', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/henrypump/inventory/0 +2020-01-21 
13:50:52,667 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:52.667452', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/henrypump/inventory/0 +2020-01-21 13:50:57,811 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:50:57.811198', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/henrypump/inventory/0 +2020-01-21 13:51:02,953 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:51:02.953156', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/henrypump/inventory/0 +2020-01-21 13:54:00,990 INFO run(26) IN Driver +2020-01-21 13:54:00,992 INFO run(27) Got Config: +{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'QEP', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'North', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': u'POE 1', u'currentData': u'empty'} +2020-01-21 13:54:01,514 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:54:01.514449', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/QEP/North/POE 1 +2020-01-21 13:54:06,701 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:54:06.701727', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/QEP/North/POE 1 +2020-01-21 13:54:11,840 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:54:11.840273', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/QEP/North/POE 1 +2020-01-21 13:54:16,969 INFO dataCollection(97) Publishing: {'pond 1 
height': 12.0, 'timestamp': '2020-01-21T13:54:16.969216', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/QEP/North/POE 1 +2020-01-21 13:54:22,110 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:54:22.109787', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/QEP/North/POE 1 +2020-01-21 13:54:27,253 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:54:27.253244', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/QEP/North/POE 1 +2020-01-21 13:54:32,392 INFO dataCollection(97) Publishing: {'pond 1 height': 12.0, 'timestamp': '2020-01-21T13:54:32.392205', 'pond 2 height': -17.29999542236328} +To Topic: dt/hpiot/QEP/North/POE 1 +2020-01-21 13:57:56,108 INFO run(26) IN Driver +2020-01-21 13:57:56,109 INFO run(27) Got Config: +{u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'appname': u'hpiot', u'company': u'QEP', u'modbusData': u'empty', u'PLCData': {u'tag1': {u'name': u'current', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'volumeflow', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'North', u'deviceType': u'inventory', u'voltageData': u'empty', u'locationID': u'POE 1', u'currentData': u'empty'} +2020-01-21 13:57:56,742 INFO dataCollection(97) Publishing: {'current': 12.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:57:56.742390'} +To Topic: dt/hpiot/QEP/North/POE 1 +2020-01-21 13:58:01,878 INFO dataCollection(97) Publishing: {'current': 12.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:01.878045'} +To Topic: dt/hpiot/QEP/North/POE 1 +2020-01-21 13:58:07,014 INFO dataCollection(97) Publishing: {'current': 15.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:07.013781'} +To Topic: 
dt/hpiot/QEP/North/POE 1 +2020-01-21 13:58:12,198 INFO dataCollection(97) Publishing: {'current': 15.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:12.198353'} +To Topic: dt/hpiot/QEP/North/POE 1 +2020-01-21 13:58:17,338 INFO dataCollection(97) Publishing: {'current': 27.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:17.338821'} +To Topic: dt/hpiot/QEP/North/POE 1 +2020-01-21 13:58:22,468 INFO dataCollection(97) Publishing: {'current': 27.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:22.468762'} +To Topic: dt/hpiot/QEP/North/POE 1 +2020-01-21 13:58:27,608 INFO dataCollection(97) Publishing: {'current': 27.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:27.608766'} +To Topic: dt/hpiot/QEP/North/POE 1 +2020-01-21 13:58:32,749 INFO dataCollection(97) Publishing: {'current': 27.0, 'volumeflow': -17.29999542236328, 'timestamp': '2020-01-21T13:58:32.749239'} +To Topic: dt/hpiot/QEP/North/POE 1 diff --git a/logs/test.log b/logs/test.log new file mode 100644 index 0000000..fe32651 --- /dev/null +++ b/logs/test.log @@ -0,0 +1,15 @@ +2020-01-21 13:30:40,848 INFO run(26) IN Driver +2020-01-21 13:30:40,849 INFO run(27) Got Config: +{u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'currentData': u'empty', u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'deviceType': u'inventory', u'locationID': 0, u'appname': u'hpiot', u'voltageData': u'empty', u'company': u'henrypump', u'modbusData': u'empty'} +2020-01-21 13:35:19,199 INFO run(26) IN Driver +2020-01-21 13:35:19,201 INFO run(27) Got Config: +{u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': 
u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'deviceType': u'inventory', u'modbusData': u'empty', u'appname': u'hpiot', u'locationID': 0, u'company': u'henrypump', u'currentData': u'empty', u'voltageData': u'empty'} +2020-01-21 13:38:31,119 INFO run(26) IN Driver +2020-01-21 13:38:31,119 INFO run(26) IN Driver +2020-01-21 13:38:31,126 INFO run(27) Got Config: +{u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'deviceType': u'inventory', u'modbusData': u'empty', u'appname': u'hpiot', u'locationID': 0, u'company': u'henrypump', u'currentData': u'empty', u'voltageData': u'empty'} +2020-01-21 13:38:31,126 INFO run(27) Got Config: +{u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'deviceType': u'inventory', u'modbusData': u'empty', u'appname': u'hpiot', u'locationID': 0, u'company': u'henrypump', u'currentData': u'empty', u'voltageData': u'empty'} +2020-01-21 
13:39:34,604 INFO run(26) IN Driver +2020-01-21 13:39:34,605 INFO run(27) Got Config: +{u'PLCData': {u'tag1': {u'name': u'pond 1 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond1Height', u'changeThreshold': 1}, u'tag2': {u'name': u'pond 2 height', u'plcType': u'Micro800', u'guaranteed': 3600, u'plcIP': u'192.168.1.12', u'tag': u'pond2Height', u'changeThreshold': 1}}, u'field': u'inventory', u'currentData': u'empty', u'certificateID': u'bfb15ea80f83b61a4ae3e5d43ed9519cb66380a8cfd2d784aaf9ace87bc275e4', u'deviceType': u'inventory', u'appname': u'hpiot', u'locationID': 0, u'company': u'henrypump', u'modbusData': u'empty', u'voltageData': u'empty'} diff --git a/main.py b/main.py new file mode 100644 index 0000000..2690918 --- /dev/null +++ b/main.py @@ -0,0 +1,127 @@ +from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient +import logging +import time +import argparse +import json +import os +from datetime import datetime +import urllib +import multiprocessing +import driver +import utilities +def main(): + + AllowedActions = ['both', 'publish', 'subscribe'] + + # Custom MQTT message callback + def customCallback(client, userdata, message): + print("Client: ") + print(client) + print("User Data: ") + print(userdata) + print("Received a new message: ") + print(message.payload) + print("from topic: ") + print(message.topic) + print("--------------\n\n") + + + # Read in command-line parameters + parser = argparse.ArgumentParser() + parser.add_argument("-e", "--endpoint", action="store", required=True, dest="host", help="Your AWS IoT custom endpoint") + parser.add_argument("-r", "--rootCA", action="store", required=True, dest="rootCAPath", help="Root CA file path") + parser.add_argument("-c", "--cert", action="store", dest="certificatePath", help="Certificate file path") + parser.add_argument("-k", "--key", action="store", dest="privateKeyPath", help="Private key file path") + parser.add_argument("-p", "--port", action="store", 
dest="port", type=int, help="Port number override") + parser.add_argument("-w", "--websocket", action="store_true", dest="useWebsocket", default=False, + help="Use MQTT over WebSocket") + parser.add_argument("-id", "--clientId", action="store", dest="clientId", default="basicPubSub", + help="Targeted client id") + parser.add_argument("-t", "--topic", action="store", dest="topic", default="dt/hpiot/", help="Targeted topic") + parser.add_argument("-m", "--mode", action="store", dest="mode", default="both", + help="Operation modes: %s"%str(AllowedActions)) + parser.add_argument("-M", "--message", action="store", dest="message", default="Hello World!", + help="Message to publish") + + args = parser.parse_args() + host = args.host + rootCAPath = args.rootCAPath + certificatePath = args.certificatePath + privateKeyPath = args.privateKeyPath + port = args.port + useWebsocket = args.useWebsocket + topic = args.topic + + def jitp_registration(): + #Attempt to connect to AWS IoT Core and start JITP for the given certificate + myAWSIoTMQTTClient = AWSIoTMQTTClient(certificateID) + myAWSIoTMQTTClient.configureEndpoint(host, port) + myAWSIoTMQTTClient.configureCredentials(rootCAPath, './device1Cert.key', './device1CertAndCACert.pem') + while True: + try: + myAWSIoTMQTTClient.connect() + myAWSIoTMQTTClient.disconnect() + break + except Exception as e: + logger.info("Didn't connect; trying again in 10 seconds: {}".format(e)) + time.sleep(10) + #Get the config that should be in the database after JITP concludes + return json.load(urllib.urlopen('https://4ax24ru9ra.execute-api.us-east-1.amazonaws.com/Gamma/HPIoTgetConfig/?certificateID={}'.format(certificateID))) + + # Configure logging + logger = logging.getLogger("AWSIoTPythonSDK.core") + logger.setLevel(logging.INFO) + streamHandler = logging.StreamHandler() + formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') + streamHandler.setFormatter(formatter) + 
logger.addHandler(streamHandler) + + #Check for the main device certificate, creating it if absent + if not os.path.isfile('./device1Cert.pem'): + os.system('openssl genrsa -out device1Cert.key 2048') + os.system('openssl req -config server.conf -new -key device1Cert.key -out device1Cert.pem') + os.system('openssl x509 -req -in device1Cert.pem -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out device1Cert.pem -days 365 -sha256') + + if not os.path.isfile('./device1CertAndCACert.pem'): + os.system('cat device1Cert.pem rootCA.pem > device1CertAndCACert.pem') + + + certificateID = os.popen('openssl x509 -in device1Cert.pem -outform der | sha256sum').read()[:-4] + + #Download the config from DynamoDB with an API call + logger.info("Attempting to download config file") + config = {} + try: + config = json.load(urllib.urlopen('https://4ax24ru9ra.execute-api.us-east-1.amazonaws.com/Gamma/HPIoTgetConfig/?certificateID={}'.format(certificateID))) + except Exception as e: + logger.error(e) + + #No config in the database: probably not registered yet, so attempt to connect and start JITP + if 'certificateID' not in config.keys(): + config = jitp_registration() + + #config = utilities.unmarshal_dynamodb_json(config) + + + print(config) + #Get all the device names from the config + devices = [ele for ele in config.keys() if('device' in ele)] + + #Build a list of all processes, so that they can be terminated afterwards + all_processes = [] + for device in devices: + driver.run(config[device],device,port, host, rootCAPath) + ''' + process = multiprocessing.Process(target=driver.run, args=(config[device],device,port, host, rootCAPath), name="Process-{}".format(config[device]['locationID'])) + process.start() + all_processes.append(process) + logger.info(all_processes) + for process in all_processes: + if process.exitcode: + process.terminate() + ''' +if __name__ == '__main__': + main() + + diff --git a/minimalmodbus.py b/minimalmodbus.py new file mode 100644 index 0000000..0c89d8b --- 
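Note on the `certificateID` step in `main.py` above: it shells out to `openssl x509 -outform der | sha256sum`, i.e. the ID is the SHA-256 digest of the DER-encoded certificate. Since PEM is just base64-wrapped DER, the same fingerprint can be computed in pure Python. This is a minimal sketch for illustration (the `certificate_id` helper is hypothetical, not part of the repo), not a replacement for the code above:

```python
import base64
import hashlib

def certificate_id(pem_text):
    """Hex SHA-256 fingerprint of the DER bytes inside a PEM certificate.

    Equivalent to: openssl x509 -in cert.pem -outform der | sha256sum
    (the PEM body between the BEGIN/END markers is base64-encoded DER).
    """
    body = []
    for line in pem_text.splitlines():
        line = line.strip()
        # Skip blank lines and the -----BEGIN/END CERTIFICATE----- markers
        if line and not line.startswith("-----"):
            body.append(line)
    der = base64.b64decode("".join(body))
    return hashlib.sha256(der).hexdigest()
```

This avoids depending on the `openssl` and `sha256sum` binaries being on the PATH, which is the main fragility of the `os.popen` approach.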
/dev/null +++ b/minimalmodbus.py @@ -0,0 +1,4028 @@ +# -*- coding: utf-8 -*- +# +# Copyright 2019 Jonas Berg +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +"""MinimalModbus: A Python driver for Modbus RTU/ASCII via serial port (via USB, RS485 or RS232).""" + +__author__ = "Jonas Berg" +__license__ = "Apache License, Version 2.0" +__status__ = "Production" +__url__ = "https://github.com/pyhys/minimalmodbus" +__version__ = "1.0.2" + + +import os +import struct +import sys +import time + +import serial + +if sys.version > "3": + import binascii + +# Allow long also in Python3 +# http://python3porting.com/noconv.html +if sys.version > "3": + long = int + +_NUMBER_OF_BYTES_BEFORE_REGISTERDATA = 1 # Within the payload +_NUMBER_OF_BYTES_PER_REGISTER = 2 +_MAX_NUMBER_OF_REGISTERS_TO_WRITE = 123 +_MAX_NUMBER_OF_REGISTERS_TO_READ = 125 +_MAX_NUMBER_OF_BITS_TO_WRITE = 1968 # 0x7B0 +_MAX_NUMBER_OF_BITS_TO_READ = 2000 # 0x7D0 +_MAX_NUMBER_OF_DECIMALS = 10 # Some instrument might store 0.00000154 Ampere as 154 etc +_MAX_BYTEORDER_VALUE = 3 +_SECONDS_TO_MILLISECONDS = 1000 +_BITS_PER_BYTE = 8 +_ASCII_HEADER = ":" +_ASCII_FOOTER = "\r\n" +_BYTEPOSITION_FOR_ASCII_HEADER = 0 # Relative to plain response +_BYTEPOSITION_FOR_SLAVEADDRESS = 0 # Relative to (stripped) response +_BYTEPOSITION_FOR_FUNCTIONCODE = 1 # Relative to (stripped) response +_BYTEPOSITION_FOR_SLAVE_ERROR_CODE = 2 # Relative to (stripped) response +_BITNUMBER_FUNCTIONCODE_ERRORINDICATION = 7 + +# Several instrument 
instances can share the same serialport +_serialports = {} # Key: port name (str), value: port instance +_latest_read_times = {} # Key: port name (str), value: timestamp (float) + +# ############### # +# Named constants # +# ############### # + +MODE_RTU = "rtu" +MODE_ASCII = "ascii" +BYTEORDER_BIG = 0 +BYTEORDER_LITTLE = 1 +BYTEORDER_BIG_SWAP = 2 +BYTEORDER_LITTLE_SWAP = 3 + +# Replace with enum when Python3 only +_PAYLOADFORMAT_BIT = "bit" +_PAYLOADFORMAT_BITS = "bits" +_PAYLOADFORMAT_FLOAT = "float" +_PAYLOADFORMAT_LONG = "long" +_PAYLOADFORMAT_REGISTER = "register" +_PAYLOADFORMAT_REGISTERS = "registers" +_PAYLOADFORMAT_STRING = "string" +_ALL_PAYLOADFORMATS = [ + _PAYLOADFORMAT_BIT, + _PAYLOADFORMAT_BITS, + _PAYLOADFORMAT_FLOAT, + _PAYLOADFORMAT_LONG, + _PAYLOADFORMAT_REGISTER, + _PAYLOADFORMAT_REGISTERS, + _PAYLOADFORMAT_STRING, +] + +# ######################## # +# Modbus instrument object # +# ######################## # + + +class Instrument: + """Instrument class for talking to instruments (slaves). + + Uses the Modbus RTU or ASCII protocols (via RS485 or RS232). + + Args: + * port (str): The serial port name, for example ``/dev/ttyUSB0`` (Linux), + ``/dev/tty.usbserial`` (OS X) or ``COM4`` (Windows). + * slaveaddress (int): Slave address in the range 1 to 247 (use decimal numbers, + not hex). Address 0 is for broadcast, and 248-255 are reserved. + * mode (str): Mode selection. Can be MODE_RTU or MODE_ASCII. + * close_port_after_each_call (bool): If the serial port should be closed after + each call to the instrument. + * debug (bool): Set this to :const:`True` to print the communication details + + """ + + def __init__( + self, + port, + slaveaddress, + mode=MODE_RTU, + close_port_after_each_call=False, + debug=False, + ): + """Initialize instrument and open corresponding serial port.""" + self.address = slaveaddress + """Slave address (int). Most often set by the constructor + (see the class documentation). 
""" + + self.mode = mode + """Slave mode (str), can be MODE_RTU or MODE_ASCII. + Most often set by the constructor (see the class documentation). + + Changing this will not affect how other instruments use the same serial port. + + New in version 0.6. + """ + + self.precalculate_read_size = True + """If this is :const:`False`, the serial port reads until timeout + instead of just reading a specific number of bytes. Defaults to :const:`True`. + + Changing this will not affect how other instruments use the same serial port. + + New in version 0.5. + """ + + self.debug = debug + """Set this to :const:`True` to print the communication details. + Defaults to :const:`False`. + + Most often set by the constructor (see the class documentation). + + Changing this will not affect how other instruments use the same serial port. + """ + + self.clear_buffers_before_each_transaction = True + """If this is :const:`True`, the serial port read and write buffers are + cleared before each request to the instrument, to avoid cumulative byte + sync errors across multiple messages. Defaults to :const:`True`. + + Changing this will not affect how other instruments use the same serial port. + + New in version 1.0. + """ + + self.close_port_after_each_call = close_port_after_each_call + """If this is :const:`True`, the serial port will be closed after each + call. Defaults to :const:`False`. + + Changing this will not affect how other instruments use the same serial port. + + Most often set by the constructor (see the class documentation). + """ + + self.handle_local_echo = False + """Set to :const:`True` if your RS-485 adaptor has local echo enabled. + Then the transmitted message will immediately appear at the receive + line of the RS-485 adaptor. MinimalModbus will then read and discard + this data, before reading the data from the slave. + Defaults to :const:`False`. + + Changing this will not affect how other instruments use the same serial port. + + New in version 0.7. 
+ """ + + self.serial = None + """The serial port object as defined by the pySerial module. Created by the constructor. + + Attributes that could be changed after initialisation: + + - port (str): Serial port name. + - Most often set by the constructor (see the class documentation). + - baudrate (int): Baudrate in Baud. + - Defaults to 19200. + - parity (probably int): Parity. See the pySerial module for documentation. + - Defaults to serial.PARITY_NONE. + - bytesize (int): Bytesize in bits. + - Defaults to 8. + - stopbits (int): The number of stopbits. + - Defaults to 1. + - timeout (float): Read timeout value in seconds. + - Defaults to 0.05 s. + - write_timeout (float): Write timeout value in seconds. + - Defaults to 2.0 s. + """ + + if port not in _serialports or not _serialports[port]: + self._print_debug("Create serial port {}".format(port)) + self.serial = _serialports[port] = serial.Serial( + port=port, + baudrate=19200, + parity=serial.PARITY_NONE, + bytesize=8, + stopbits=1, + timeout=0.05, + write_timeout=2.0, + ) + else: + self._print_debug("Serial port {} already exists".format(port)) + self.serial = _serialports[port] + if (self.serial.port is None) or (not self.serial.is_open): + self._print_debug("Serial port {} is closed. Opening.".format(port)) + self.serial.open() + + if self.close_port_after_each_call: + self._print_debug("Closing serial port {}".format(port)) + self.serial.close() + + def __repr__(self): + """Give string representation of the :class:`.Instrument` object.""" + template = ( + "{}.{}<id=0x{:x}, address={}, mode={}, close_port_after_each_call={}, " + "precalculate_read_size={}, clear_buffers_before_each_transaction={}, " + "handle_local_echo={}, debug={}, serial={}>" + ) + return template.format( + self.__module__, + self.__class__.__name__, + id(self), + self.address, + self.mode, + self.close_port_after_each_call, + self.precalculate_read_size, + self.clear_buffers_before_each_transaction, + self.handle_local_echo, + self.debug, + self.serial, + ) + + def _print_debug(self, text): + if self.debug: + _print_out("MinimalModbus debug mode. 
" + text) + + # ################################# # + # Methods for talking to the slave # + # ################################# # + + def read_bit(self, registeraddress, functioncode=2): + """Read one bit from the slave (instrument). + + This is for a bit that has its individual address in the instrument. + + Args: + * registeraddress (int): The slave register address (use decimal numbers, not hex). + * functioncode (int): Modbus function code. Can be 1 or 2. + + Returns: + The bit value 0 or 1 (int). + + Raises: + TypeError, ValueError, ModbusException, + serial.SerialException (inherited from IOError) + + """ + _check_functioncode(functioncode, [1, 2]) + return self._generic_command( + functioncode, + registeraddress, + number_of_bits=1, + payloadformat=_PAYLOADFORMAT_BIT, + ) + + def write_bit(self, registeraddress, value, functioncode=5): + """Write one bit to the slave (instrument). + + This is for a bit that has its individual address in the instrument. + + Args: + * registeraddress (int): The slave register address (use decimal numbers, not hex). + * value (int or bool): 0 or 1, or True or False + * functioncode (int): Modbus function code. Can be 5 or 15. + + Returns: + None + + Raises: + TypeError, ValueError, ModbusException, + serial.SerialException (inherited from IOError) + + """ + _check_functioncode(functioncode, [5, 15]) + _check_int(value, minvalue=0, maxvalue=1, description="input value") + self._generic_command( + functioncode, + registeraddress, + value, + number_of_bits=1, + payloadformat=_PAYLOADFORMAT_BIT, + ) + + def read_bits(self, registeraddress, number_of_bits, functioncode=2): + """Read multiple bits from the slave (instrument). + + This is for bits that have individual addresses in the instrument. + + Args: + * registeraddress (int): The slave register start address (use decimal + numbers, not hex). + * number_of_bits (int): Number of bits to read + * functioncode (int): Modbus function code. Can be 1 or 2. 
+ + Returns: + A list of bit values 0 or 1 (int). The first value in the list is for + the bit at the given address. + + Raises: + TypeError, ValueError, ModbusException, + serial.SerialException (inherited from IOError) + + """ + _check_functioncode(functioncode, [1, 2]) + _check_int( + number_of_bits, + minvalue=1, + maxvalue=_MAX_NUMBER_OF_BITS_TO_READ, + description="number of bits", + ) + return self._generic_command( + functioncode, + registeraddress, + number_of_bits=number_of_bits, + payloadformat=_PAYLOADFORMAT_BITS, + ) + + def write_bits(self, registeraddress, values): + """Write multiple bits to the slave (instrument). + + This is for bits that have individual addresses in the instrument. + + Uses Modbus functioncode 15. + + Args: + * registeraddress (int): The slave register start address (use decimal + numbers, not hex). + * values (list of int or bool): 0 or 1, or True or False. The first + value in the list is for the bit at the given address. + + Returns: + None + + Raises: + TypeError, ValueError, ModbusException, + serial.SerialException (inherited from IOError) + + """ + if not isinstance(values, list): + raise TypeError( + 'The "values parameter" must be a list. Given: {0!r}'.format(values) + ) + # Note: The content of the list is checked at content conversion. + _check_int( + len(values), + minvalue=1, + maxvalue=_MAX_NUMBER_OF_BITS_TO_WRITE, + description="length of input list", + ) + + self._generic_command( + 15, + registeraddress, + values, + number_of_bits=len(values), + payloadformat=_PAYLOADFORMAT_BITS, + ) + + def read_register( + self, registeraddress, number_of_decimals=0, functioncode=3, signed=False + ): + """Read an integer from one 16-bit register in the slave, possibly scaling it. + + The slave register can hold integer values in the range 0 to 65535 + ("Unsigned INT16"). + + Args: + * registeraddress (int): The slave register address (use decimal numbers, not hex). 
+ * number_of_decimals (int): The number of decimals for content conversion. + * functioncode (int): Modbus function code. Can be 3 or 4. + * signed (bool): Whether the data should be interpreted as unsigned or signed. + + .. note:: The parameter number_of_decimals was named numberOfDecimals + before MinimalModbus 1.0 + + If a value of 77.0 is stored internally in the slave register as 770, + then use ``number_of_decimals=1`` which will divide the received data by 10 + before returning the value. + + Similarly ``number_of_decimals=2`` will divide the received data by 100 before + returning the value. + + Some manufacturers allow negative values for some registers. Instead of + an allowed integer range 0 to 65535, a range -32768 to 32767 is allowed. + This is implemented as any received value in the upper range (32768 to + 65535) is interpreted as negative value (in the range -32768 to -1). + + Use the parameter ``signed=True`` if reading from a register that can hold + negative values. Then upper range data will be automatically converted into + negative return values (two's complement). + + ============== ================== ================ =============== + ``signed`` Data type in slave Alternative name Range + ============== ================== ================ =============== + :const:`False` Unsigned INT16 Unsigned short 0 to 65535 + :const:`True` INT16 Short -32768 to 32767 + ============== ================== ================ =============== + + Returns: + The register data in numerical value (int or float). 
+
+        Raises:
+            TypeError, ValueError, ModbusException,
+            serial.SerialException (inherited from IOError)
+
+        """
+        _check_functioncode(functioncode, [3, 4])
+        _check_int(
+            number_of_decimals,
+            minvalue=0,
+            maxvalue=_MAX_NUMBER_OF_DECIMALS,
+            description="number of decimals",
+        )
+        _check_bool(signed, description="signed")
+        return self._generic_command(
+            functioncode,
+            registeraddress,
+            number_of_decimals=number_of_decimals,
+            number_of_registers=1,
+            signed=signed,
+            payloadformat=_PAYLOADFORMAT_REGISTER,
+        )
+
+    def write_register(
+        self,
+        registeraddress,
+        value,
+        number_of_decimals=0,
+        functioncode=16,
+        signed=False,
+    ):
+        """Write an integer to one 16-bit register in the slave, possibly scaling it.
+
+        The slave register can hold integer values in the range 0 to
+        65535 ("Unsigned INT16").
+
+        Args:
+            * registeraddress (int): The slave register address (use decimal
+              numbers, not hex).
+            * value (int or float): The value to store in the slave register (might be
+              scaled before sending).
+            * number_of_decimals (int): The number of decimals for content conversion.
+            * functioncode (int): Modbus function code. Can be 6 or 16.
+            * signed (bool): Whether the data should be interpreted as unsigned or signed.
+
+        .. note:: The parameter number_of_decimals was named numberOfDecimals
+                  before MinimalModbus 1.0
+
+        For example, to store ``value=77.0``, use ``number_of_decimals=1`` if the slave
+        register will hold it as 770 internally. This will multiply ``value`` by 10
+        before sending it to the slave register.
+
+        Similarly ``number_of_decimals=2`` will multiply ``value`` by 100 before sending
+        it to the slave register.
+
+        As the largest number that can be written to a register is 0xFFFF = 65535,
+        the scaled value (``value`` multiplied by ``10**number_of_decimals``) must
+        be at most 65535. So when using ``number_of_decimals=3`` the maximum
+        ``value`` is 65.535.
+
+        For discussion on negative values, the range and on alternative names,
+        see :meth:`.read_register`.
+ + Use the parameter ``signed=True`` if writing to a register that can hold + negative values. Then negative input will be automatically converted into + upper range data (two's complement). + + Returns: + None + + Raises: + TypeError, ValueError, ModbusException, + serial.SerialException (inherited from IOError) + + """ + _check_functioncode(functioncode, [6, 16]) + _check_int( + number_of_decimals, + minvalue=0, + maxvalue=_MAX_NUMBER_OF_DECIMALS, + description="number of decimals", + ) + _check_bool(signed, description="signed") + _check_numerical(value, description="input value") + + self._generic_command( + functioncode, + registeraddress, + value, + number_of_decimals=number_of_decimals, + number_of_registers=1, + signed=signed, + payloadformat=_PAYLOADFORMAT_REGISTER, + ) + + def read_long( + self, registeraddress, functioncode=3, signed=False, byteorder=BYTEORDER_BIG + ): + """Read a long integer (32 bits) from the slave. + + Long integers (32 bits = 4 bytes) are stored in two consecutive 16-bit + registers in the slave. + + Args: + * registeraddress (int): The slave register start address (use decimal numbers, + not hex). + * functioncode (int): Modbus function code. Can be 3 or 4. + * signed (bool): Whether the data should be interpreted as unsigned or signed. + * byteorder (int): How multi-register data should be interpreted. + Defaults to BYTEORDER_BIG. + + ============== ================== ================ ========================== + ``signed`` Data type in slave Alternative name Range + ============== ================== ================ ========================== + :const:`False` Unsigned INT32 Unsigned long 0 to 4294967295 + :const:`True` INT32 Long -2147483648 to 2147483647 + ============== ================== ================ ========================== + + Returns: + The numerical value (int). 
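The two-register layout described above can be sketched in plain Python. This is a hypothetical helper (not part of MinimalModbus) showing how two 16-bit register values combine into one 32-bit integer when the most significant register comes first, as with the BYTEORDER_BIG default; the register values are made up.

```python
# Sketch (hypothetical helper): combine two 16-bit register values into
# one 32-bit integer, most significant register first (BYTEORDER_BIG).
def combine_two_registers(high, low, signed=False):
    raw = (high << 16) | low
    if signed and raw >= 0x80000000:
        raw -= 0x100000000  # two's complement interpretation
    return raw


print(combine_two_registers(0x0001, 0x0000))               # 65536
print(combine_two_registers(0xFFFF, 0xFFFF, signed=True))  # -1
```

Note how ``signed=True`` maps the upper half of the unsigned range onto negative values, mirroring the two's complement behaviour documented for :meth:`.read_register`.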
+ + Raises: + TypeError, ValueError, ModbusException, + serial.SerialException (inherited from IOError) + + """ + _check_functioncode(functioncode, [3, 4]) + _check_bool(signed, description="signed") + return self._generic_command( + functioncode, + registeraddress, + number_of_registers=2, + signed=signed, + byteorder=byteorder, + payloadformat=_PAYLOADFORMAT_LONG, + ) + + def write_long(self, registeraddress, value, signed=False, byteorder=BYTEORDER_BIG): + """Write a long integer (32 bits) to the slave. + + Long integers (32 bits = 4 bytes) are stored in two consecutive 16-bit + registers in the slave. + + Uses Modbus function code 16. + + For discussion on number of bits, number of registers, the range + and on alternative names, see :meth:`.read_long`. + + Args: + * registeraddress (int): The slave register start address (use decimal + numbers, not hex). + * value (int or long): The value to store in the slave. + * signed (bool): Whether the data should be interpreted as unsigned or signed. + * byteorder (int): How multi-register data should be interpreted. + Defaults to BYTEORDER_BIG. + + Returns: + None + + Raises: + TypeError, ValueError, ModbusException, + serial.SerialException (inherited from IOError) + + """ + MAX_VALUE_LONG = 4294967295 # Unsigned INT32 + MIN_VALUE_LONG = -2147483648 # INT32 + + _check_int( + value, + minvalue=MIN_VALUE_LONG, + maxvalue=MAX_VALUE_LONG, + description="input value", + ) + _check_bool(signed, description="signed") + self._generic_command( + 16, + registeraddress, + value, + number_of_registers=2, + signed=signed, + byteorder=byteorder, + payloadformat=_PAYLOADFORMAT_LONG, + ) + + def read_float( + self, + registeraddress, + functioncode=3, + number_of_registers=2, + byteorder=BYTEORDER_BIG, + ): + r"""Read a floating point number from the slave. + + Floats are stored in two or more consecutive 16-bit registers in the slave. + The encoding is according to the standard IEEE 754. 
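The IEEE 754 encoding mentioned above can be reproduced with Python's standard ``struct`` module. A small sketch (not part of the library) packing 1.0 as big-endian single precision and splitting it into the two 16-bit register values:

```python
import struct

# Pack 1.0 as a big-endian single-precision float: bytes 3f 80 00 00,
# then split into the two 16-bit register values.
packed = struct.pack(">f", 1.0)
registers = [
    int.from_bytes(packed[0:2], "big"),  # 0x3F80
    int.from_bytes(packed[2:4], "big"),  # 0x0000
]
print([hex(r) for r in registers])
```

Comparing these register values against what your instrument actually expects is a quick way to decide whether the default byte order fits.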
+
+        There are differences in the byte order used by different manufacturers.
+        A floating point value of 1.0 is encoded (in single precision) as 3f800000
+        (hex). In this implementation the data will be sent as ``'\x3f\x80'``
+        and ``'\x00\x00'`` to two consecutive registers by default. Make sure to test that
+        it makes sense for your instrument. If not, change the ``byteorder`` argument.
+
+        Args:
+            * registeraddress (int): The slave register start address (use decimal
+              numbers, not hex).
+            * functioncode (int): Modbus function code. Can be 3 or 4.
+            * number_of_registers (int): The number of registers allocated for the float.
+              Can be 2 or 4.
+            * byteorder (int): How multi-register data should be interpreted.
+              Defaults to BYTEORDER_BIG.
+
+        .. note:: The parameter number_of_registers was named numberOfRegisters
+                  before MinimalModbus 1.0
+
+        ====================================== ================= =========== =================
+        Type of floating point number in slave Size              Registers   Range
+        ====================================== ================= =========== =================
+        Single precision (binary32)            32 bits (4 bytes) 2 registers 1.4E-45 to 3.4E38
+        Double precision (binary64)            64 bits (8 bytes) 4 registers 5E-324 to 1.8E308
+        ====================================== ================= =========== =================
+
+        Returns:
+            The numerical value (float).
+
+        Raises:
+            TypeError, ValueError, ModbusException,
+            serial.SerialException (inherited from IOError)
+
+        """
+        _check_functioncode(functioncode, [3, 4])
+        _check_int(
+            number_of_registers,
+            minvalue=2,
+            maxvalue=4,
+            description="number of registers",
+        )
+        return self._generic_command(
+            functioncode,
+            registeraddress,
+            number_of_registers=number_of_registers,
+            byteorder=byteorder,
+            payloadformat=_PAYLOADFORMAT_FLOAT,
+        )
+
+    def write_float(
+        self, registeraddress, value, number_of_registers=2, byteorder=BYTEORDER_BIG
+    ):
+        """Write a floating point number to the slave.
+
+        Floats are stored in two or more consecutive 16-bit registers in the slave.
+
+        Uses Modbus function code 16.
+
+        For discussion on precision, number of registers and on byte order,
+        see :meth:`.read_float`.
+
+        Args:
+            * registeraddress (int): The slave register start address (use decimal
+              numbers, not hex).
+            * value (float or int): The value to store in the slave.
+            * number_of_registers (int): The number of registers allocated for the float.
+              Can be 2 or 4.
+            * byteorder (int): How multi-register data should be interpreted.
+              Defaults to BYTEORDER_BIG.
+
+        .. note:: The parameter number_of_registers was named numberOfRegisters
+                  before MinimalModbus 1.0
+
+        Returns:
+            None
+
+        Raises:
+            TypeError, ValueError, ModbusException,
+            serial.SerialException (inherited from IOError)
+
+        """
+        _check_numerical(value, description="input value")
+        _check_int(
+            number_of_registers,
+            minvalue=2,
+            maxvalue=4,
+            description="number of registers",
+        )
+        self._generic_command(
+            16,
+            registeraddress,
+            value,
+            number_of_registers=number_of_registers,
+            byteorder=byteorder,
+            payloadformat=_PAYLOADFORMAT_FLOAT,
+        )
+
+    def read_string(self, registeraddress, number_of_registers=16, functioncode=3):
+        """Read an ASCII string from the slave.
+
+        Each 16-bit register in the slave is interpreted as two characters
+        (each 1 byte = 8 bits). For example 16 consecutive registers can hold 32
+        characters (32 bytes).
+
+        International characters (Unicode/UTF-8) are not supported.
+
+        Args:
+            * registeraddress (int): The slave register start address (use decimal
+              numbers, not hex).
+            * number_of_registers (int): The number of registers allocated for the string.
+            * functioncode (int): Modbus function code. Can be 3 or 4.
+
+        .. note:: The parameter number_of_registers was named numberOfRegisters
+                  before MinimalModbus 1.0
+
+        Returns:
+            The string (str).
+
+        Raises:
+            TypeError, ValueError, ModbusException,
+            serial.SerialException (inherited from IOError)
+
+        """
+        _check_functioncode(functioncode, [3, 4])
+        _check_int(
+            number_of_registers,
+            minvalue=1,
+            maxvalue=_MAX_NUMBER_OF_REGISTERS_TO_READ,
+            description="number of registers for read string",
+        )
+        return self._generic_command(
+            functioncode,
+            registeraddress,
+            number_of_registers=number_of_registers,
+            payloadformat=_PAYLOADFORMAT_STRING,
+        )
+
+    def write_string(self, registeraddress, textstring, number_of_registers=16):
+        """Write an ASCII string to the slave.
+
+        Each 16-bit register in the slave is interpreted as two characters
+        (each 1 byte = 8 bits). For example 16 consecutive registers can hold 32
+        characters (32 bytes).
+
+        Uses Modbus function code 16.
+
+        International characters (Unicode/UTF-8) are not supported.
+
+        Args:
+            * registeraddress (int): The slave register start address (use decimal
+              numbers, not hex).
+            * textstring (str): The string to store in the slave, must be ASCII.
+            * number_of_registers (int): The number of registers allocated for the string.
+
+        .. note:: The parameter number_of_registers was named numberOfRegisters
+                  before MinimalModbus 1.0
+
+        If ``textstring`` is longer than ``2*number_of_registers`` characters,
+        an error is raised. Shorter strings are padded with spaces.
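The length check and space padding can be sketched as follows. This is a hypothetical helper, not the library's own code (MinimalModbus does the padding during content conversion):

```python
# Sketch (hypothetical helper): length check and space padding for a string
# that will occupy number_of_registers 16-bit registers.
def pad_for_registers(textstring, number_of_registers):
    max_characters = 2 * number_of_registers  # two ASCII characters per register
    if len(textstring) > max_characters:
        raise ValueError("String too long for the allocated registers")
    return textstring.ljust(max_characters)  # pad shorter strings with spaces


print(repr(pad_for_registers("AB", 2)))  # 'AB  '
```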
+
+        Returns:
+            None
+
+        Raises:
+            TypeError, ValueError, ModbusException,
+            serial.SerialException (inherited from IOError)
+
+        """
+        _check_int(
+            number_of_registers,
+            minvalue=1,
+            maxvalue=_MAX_NUMBER_OF_REGISTERS_TO_WRITE,
+            description="number of registers for write string",
+        )
+        _check_string(
+            textstring,
+            "input string",
+            minlength=1,
+            maxlength=2 * number_of_registers,
+            force_ascii=True,
+        )
+        self._generic_command(
+            16,
+            registeraddress,
+            textstring,
+            number_of_registers=number_of_registers,
+            payloadformat=_PAYLOADFORMAT_STRING,
+        )
+
+    def read_registers(self, registeraddress, number_of_registers, functioncode=3):
+        """Read integers from 16-bit registers in the slave.
+
+        The slave registers can hold integer values in the range 0 to
+        65535 ("Unsigned INT16").
+
+        Args:
+            * registeraddress (int): The slave register start address (use decimal
+              numbers, not hex).
+            * number_of_registers (int): The number of registers to read, max 125 registers.
+            * functioncode (int): Modbus function code. Can be 3 or 4.
+
+        .. note:: The parameter number_of_registers was named numberOfRegisters
+                  before MinimalModbus 1.0
+
+        Any scaling of the register data, or converting it to a negative number
+        (two's complement), must be done manually.
+
+        Returns:
+            The register data (a list of int). The first value in the list is for
+            the register at the given address.
+
+        Raises:
+            TypeError, ValueError, ModbusException,
+            serial.SerialException (inherited from IOError)
+
+        """
+        _check_functioncode(functioncode, [3, 4])
+        _check_int(
+            number_of_registers,
+            minvalue=1,
+            maxvalue=_MAX_NUMBER_OF_REGISTERS_TO_READ,
+            description="number of registers",
+        )
+        return self._generic_command(
+            functioncode,
+            registeraddress,
+            number_of_registers=number_of_registers,
+            payloadformat=_PAYLOADFORMAT_REGISTERS,
+        )
+
+    def write_registers(self, registeraddress, values):
+        """Write integers to 16-bit registers in the slave.
+
+        The slave registers can hold integer values in the range 0 to
+        65535 ("Unsigned INT16").
+
+        Uses Modbus function code 16.
+
+        The number of registers that will be written is defined by the length of
+        the ``values`` list.
+
+        Args:
+            * registeraddress (int): The slave register start address (use decimal
+              numbers, not hex).
+            * values (list of int): The values to store in the slave registers,
+              max 123 values. The first value in the list is for the register
+              at the given address.
+
+        Any scaling of the register data, or converting it to a negative number
+        (two's complement), must be done manually.
+
+        Returns:
+            None
+
+        Raises:
+            TypeError, ValueError, ModbusException,
+            serial.SerialException (inherited from IOError)
+
+        """
+        if not isinstance(values, list):
+            raise TypeError(
+                'The "values" parameter must be a list. Given: {0!r}'.format(values)
+            )
+        _check_int(
+            len(values),
+            minvalue=1,
+            maxvalue=_MAX_NUMBER_OF_REGISTERS_TO_WRITE,
+            description="length of input list",
+        )
+        # Note: The content of the list is checked at content conversion.
+
+        self._generic_command(
+            16,
+            registeraddress,
+            values,
+            number_of_registers=len(values),
+            payloadformat=_PAYLOADFORMAT_REGISTERS,
+        )
+
+    # ############### #
+    # Generic command #
+    # ############### #
+
+    def _generic_command(
+        self,
+        functioncode,
+        registeraddress,
+        value=None,
+        number_of_decimals=0,
+        number_of_registers=0,
+        number_of_bits=0,
+        signed=False,
+        byteorder=BYTEORDER_BIG,
+        payloadformat=None,
+    ):
+        """Perform generic command for reading and writing registers and bits.
+
+        Args:
+            * functioncode (int): Modbus function code.
+            * registeraddress (int): The register address (use decimal numbers, not hex).
+            * value (numerical or string or None or list of int): The value to store
+              in the register. Depends on payloadformat.
+            * number_of_decimals (int): The number of decimals for content conversion.
+              Only for a single register.
+            * number_of_registers (int): The number of registers to read/write.
+              Only certain values allowed, depends on payloadformat.
+            * number_of_bits (int): The number of bits to read/write.
+            * signed (bool): Whether the data should be interpreted as unsigned or
+              signed. Only for a single register or for payloadformat='long'.
+            * byteorder (int): How multi-register data should be interpreted.
+            * payloadformat (None or string): Any of the _PAYLOADFORMAT_* values
+
+        If a value of 77.0 is stored internally in the slave register as 770,
+        then use ``number_of_decimals=1`` which will divide the received data
+        from the slave by 10 before returning the value. Similarly
+        ``number_of_decimals=2`` will divide the received data by 100 before returning
+        the value. The same functionality is also used when writing data to the slave.
+
+        Returns:
+            The register data in numerical value (int or float), or the bit value 0 or
+            1 (int), or ``None``.
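The ``number_of_decimals`` scaling described above amounts to a division when reading and a multiplication when writing. A sketch with hypothetical helper names (not the library's own):

```python
# Sketch (hypothetical helpers): the scaling applied by number_of_decimals
# when reading from and writing to a single register.
def scale_from_slave(raw, number_of_decimals):
    if number_of_decimals == 0:
        return raw  # keep the value as int when no scaling is requested
    return raw / float(10 ** number_of_decimals)


def scale_to_slave(value, number_of_decimals):
    return int(round(value * 10 ** number_of_decimals))


print(scale_from_slave(770, 1))  # 77.0
print(scale_to_slave(77.0, 1))   # 770
```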
+ + Raises: + TypeError, ValueError, ModbusException, + serial.SerialException (inherited from IOError) + + """ + ALL_ALLOWED_FUNCTIONCODES = [1, 2, 3, 4, 5, 6, 15, 16] + ALLOWED_FUNCTIONCODES = {} + ALLOWED_FUNCTIONCODES[_PAYLOADFORMAT_BIT] = [1, 2, 5, 15] + ALLOWED_FUNCTIONCODES[_PAYLOADFORMAT_BITS] = [1, 2, 15] + ALLOWED_FUNCTIONCODES[_PAYLOADFORMAT_REGISTER] = [3, 4, 6, 16] + ALLOWED_FUNCTIONCODES[_PAYLOADFORMAT_FLOAT] = [3, 4, 16] + ALLOWED_FUNCTIONCODES[_PAYLOADFORMAT_STRING] = [3, 4, 16] + ALLOWED_FUNCTIONCODES[_PAYLOADFORMAT_LONG] = [3, 4, 16] + ALLOWED_FUNCTIONCODES[_PAYLOADFORMAT_REGISTERS] = [3, 4, 16] + + # Check input values + _check_functioncode(functioncode, ALL_ALLOWED_FUNCTIONCODES) + _check_registeraddress(registeraddress) + _check_int( + number_of_decimals, + minvalue=0, + maxvalue=_MAX_NUMBER_OF_DECIMALS, + description="number of decimals", + ) + _check_int( + number_of_registers, + minvalue=0, + maxvalue=max( + _MAX_NUMBER_OF_REGISTERS_TO_READ, _MAX_NUMBER_OF_REGISTERS_TO_WRITE + ), + description="number of registers", + ) + _check_int( + number_of_bits, + minvalue=0, + maxvalue=max(_MAX_NUMBER_OF_BITS_TO_READ, _MAX_NUMBER_OF_BITS_TO_WRITE), + description="number of bits", + ) + _check_bool(signed, description="signed") + _check_int( + byteorder, + minvalue=0, + maxvalue=_MAX_BYTEORDER_VALUE, + description="byteorder", + ) + + if payloadformat not in _ALL_PAYLOADFORMATS: + if not isinstance(payloadformat, str): + raise TypeError( + "The payload format should be a string. Given: {!r}".format( + payloadformat + ) + ) + raise ValueError( + "Wrong payload format variable. Given: {!r}".format(payloadformat) + ) + + number_of_register_bytes = number_of_registers * _NUMBER_OF_BYTES_PER_REGISTER + + # Check combinations: Payload format and functioncode + if functioncode not in ALLOWED_FUNCTIONCODES[payloadformat]: + raise ValueError( + "Wrong functioncode for payloadformat " + + "{!r}. 
Given: {!r}.".format(payloadformat, functioncode)
+            )
+
+        # Check combinations: signed
+        if signed:
+            if payloadformat not in [_PAYLOADFORMAT_REGISTER, _PAYLOADFORMAT_LONG]:
+                raise ValueError(
+                    'The "signed" parameter can not be used for this payload format. '
+                    + "Given format: {!r}.".format(payloadformat)
+                )
+
+        # Check combinations: number_of_decimals
+        if number_of_decimals > 0:
+            if payloadformat != _PAYLOADFORMAT_REGISTER:
+                raise ValueError(
+                    'The "number_of_decimals" parameter can not be used for this payload format. '
+                    + "Given format: {0!r}.".format(payloadformat)
+                )
+
+        # Check combinations: byteorder
+        if byteorder:
+            if payloadformat not in [_PAYLOADFORMAT_FLOAT, _PAYLOADFORMAT_LONG]:
+                raise ValueError(
+                    'The "byteorder" parameter can not be used for this payload format. '
+                    + "Given format: {0!r}.".format(payloadformat)
+                )
+
+        # Check combinations: number of bits
+        if payloadformat == _PAYLOADFORMAT_BIT:
+            if number_of_bits != 1:
+                raise ValueError(
+                    "For BIT payload format the number of bits should be 1. "
+                    + "Given: {0!r}.".format(number_of_bits)
+                )
+        elif payloadformat == _PAYLOADFORMAT_BITS:
+            if number_of_bits < 1:
+                raise ValueError(
+                    "For BITS payload format the number of bits should be at least 1. "
+                    + "Given: {0!r}.".format(number_of_bits)
+                )
+        elif number_of_bits:
+            raise ValueError(
+                "The number_of_bits parameter is wrong for payload format "
+                + "{0!r}. Given: {1!r}.".format(payloadformat, number_of_bits)
+            )
+
+        # Check combinations: Number of registers
+        if functioncode in [1, 2, 5, 15] and number_of_registers:
+            raise ValueError(
+                "The number_of_registers is not valid for this function code. 
" + + "number_of_registers: {0!r}, functioncode {1}.".format( + number_of_registers, functioncode + ) + ) + elif functioncode in [3, 4, 16] and not number_of_registers: + raise ValueError( + "The number_of_registers must be > 0 for functioncode " + + "{}.".format(functioncode) + ) + elif functioncode == 6 and number_of_registers != 1: + raise ValueError( + "The number_of_registers must be 1 for functioncode 6. " + + "Given: {}.".format(number_of_registers) + ) + if ( + functioncode == 16 + and payloadformat == _PAYLOADFORMAT_REGISTER + and number_of_registers != 1 + ): + raise ValueError( + "Wrong number_of_registers when writing to a " + + "single register. Given {0!r}.".format(number_of_registers) + ) + # Note: For function code 16 there is checking also in the content + # conversion functions. + + # Check combinations: Value + if functioncode in [5, 6, 15, 16] and value is None: + raise ValueError( + "The input value must be given for this function code. " + + "Given {0!r} and {1}.".format(value, functioncode) + ) + elif functioncode in [1, 2, 3, 4] and value is not None: + raise ValueError( + "The input value should not be given for this function code. " + + "Given {0!r} and {1}.".format(value, functioncode) + ) + + # Check combinations: Value for numerical + if functioncode == 16 and payloadformat in [ + _PAYLOADFORMAT_REGISTER, + _PAYLOADFORMAT_FLOAT, + _PAYLOADFORMAT_LONG, + ]: + _check_numerical(value, description="input value") + if functioncode == 6 and payloadformat == _PAYLOADFORMAT_REGISTER: + _check_numerical(value, description="input value") + + # Check combinations: Value for string + if functioncode == 16 and payloadformat == _PAYLOADFORMAT_STRING: + _check_string( + value, "input string", minlength=1, maxlength=number_of_register_bytes + ) + # Note: The string might be padded later, so the length might be shorter + # than number_of_register_bytes. 
+
+        # Check combinations: Value for registers
+        if functioncode == 16 and payloadformat == _PAYLOADFORMAT_REGISTERS:
+            if not isinstance(value, list):
+                raise TypeError(
+                    "The value parameter for payloadformat REGISTERS must be a list. "
+                    + "Given {0!r}.".format(value)
+                )
+
+            if len(value) != number_of_registers:
+                raise ValueError(
+                    "The list length does not match number of registers. "
+                    + "List: {0!r}, Number of registers: {1!r}.".format(
+                        value, number_of_registers
+                    )
+                )
+
+        # Check combinations: Value for bit
+        if functioncode in [5, 15] and payloadformat == _PAYLOADFORMAT_BIT:
+            _check_int(
+                value,
+                minvalue=0,
+                maxvalue=1,
+                description="input value for payload format BIT",
+            )
+
+        # Check combinations: Value for bits
+        if functioncode == 15 and payloadformat == _PAYLOADFORMAT_BITS:
+            if not isinstance(value, list):
+                raise TypeError(
+                    "The value parameter for payloadformat BITS must be a list. "
+                    + "Given {0!r}.".format(value)
+                )
+
+            if len(value) != number_of_bits:
+                raise ValueError(
+                    "The list length does not match number of bits. "
+                    + "List: {0!r}, Number of bits: {1!r}.".format(
+                        value, number_of_bits
+                    )
+                )
+
+        # Create payload
+        payload_to_slave = _create_payload(
+            functioncode,
+            registeraddress,
+            value,
+            number_of_decimals,
+            number_of_registers,
+            number_of_bits,
+            signed,
+            byteorder,
+            payloadformat,
+        )
+
+        # Communicate with instrument
+        payload_from_slave = self._perform_command(functioncode, payload_to_slave)
+
+        # Parse response payload
+        return _parse_payload(
+            payload_from_slave,
+            functioncode,
+            registeraddress,
+            value,
+            number_of_decimals,
+            number_of_registers,
+            number_of_bits,
+            signed,
+            byteorder,
+            payloadformat,
+        )
+
+    # #################################### #
+    # Communication implementation details #
+    # #################################### #
+
+    def _perform_command(self, functioncode, payload_to_slave):
+        """Perform the command having the *functioncode*.
+ + Args: + * functioncode (int): The function code for the command to be performed. + Can for example be 'Write register' = 16. + * payload_to_slave (str): Data to be transmitted to the slave (will be + embedded in slaveaddress, CRC etc) + + Returns: + The extracted data payload from the slave (a string). It has been + stripped of CRC etc. + + Raises: + TypeError, ValueError, ModbusException, + serial.SerialException (inherited from IOError) + + Makes use of the :meth:`_communicate` method. The request is generated + with the :func:`_embed_payload` function, and the parsing of the + response is done with the :func:`_extract_payload` function. + + """ + DEFAULT_NUMBER_OF_BYTES_TO_READ = 1000 + + _check_functioncode(functioncode, None) + _check_string(payload_to_slave, description="payload") + + # Build request + request = _embed_payload( + self.address, self.mode, functioncode, payload_to_slave + ) + + # Calculate number of bytes to read + number_of_bytes_to_read = DEFAULT_NUMBER_OF_BYTES_TO_READ + if self.precalculate_read_size: + try: + number_of_bytes_to_read = _predict_response_size( + self.mode, functioncode, payload_to_slave + ) + except Exception: + if self.debug: + template = ( + "Could not precalculate response size for Modbus {} mode. " + + "Will read {} bytes. Request: {!r}" + ) + self._print_debug( + template.format(self.mode, number_of_bytes_to_read, request) + ) + + # Communicate + response = self._communicate(request, number_of_bytes_to_read) + + # Extract payload + payload_from_slave = _extract_payload( + response, self.address, self.mode, functioncode + ) + return payload_from_slave + + def _communicate(self, request, number_of_bytes_to_read): + """Talk to the slave via a serial port. + + Args: + request (str): The raw request that is to be sent to the slave. + number_of_bytes_to_read (int): number of bytes to read + + Returns: + The raw data (string) returned from the slave. 
+
+        Raises:
+            TypeError, ValueError, ModbusException,
+            serial.SerialException (inherited from IOError)
+
+        Note that the answer might contain ASCII control characters, which
+        makes it difficult to print in the prompt. Use repr() to make the
+        string printable (it shows the ASCII values for control characters).
+
+        Will block until reaching *number_of_bytes_to_read* or timeout.
+
+        If the attribute :attr:`Instrument.debug` is :const:`True`, the communication
+        details are printed.
+
+        If the attribute :attr:`Instrument.close_port_after_each_call` is :const:`True` the
+        serial port is closed after each call.
+
+        Timing::
+
+                            Request from master (Master is writing)
+                            |
+                            |                             Response from slave (Master is reading)
+                            |                             |
+            --------R-------W-----------------------------R-------W-----------------------------
+                    |       |                             |
+                    |       |<------ Roundtrip time ----->|
+                    |       |
+                 -->|-------|<---- Silent period
+
+        The resolution for Python's time.time() is lower on Windows than on Linux.
+        It is about 16 ms on Windows according to
+        https://stackoverflow.com/questions/157359/accurate-timestamping-in-python-logging
+
+        For Python3, the information sent to and from pySerial should be of the type bytes.
+        This is taken care of automatically by MinimalModbus.
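The silent period in the timing diagram above comes from the Modbus RTU requirement of at least 3.5 character times between frames. A sketch of how such a period can be derived from the baud rate, assuming 11 bit times per character (start bit, 8 data bits, parity/stop bits) and the fixed 1.75 ms floor that the Modbus serial line specification recommends for fast baud rates; the helper name is hypothetical:

```python
# Sketch (hypothetical helper): minimum silent period of 3.5 character times,
# assuming 11 bit times per character (start + 8 data + parity/stop bits).
# The 1.75 ms floor for fast baud rates follows the Modbus serial line spec.
def minimum_silent_period(baudrate):
    BITTIMES_PER_CHARACTER = 11
    seconds_per_character = BITTIMES_PER_CHARACTER / float(baudrate)
    return max(3.5 * seconds_per_character, 0.00175)


print(round(minimum_silent_period(19200) * 1000, 3))   # about 2.005 ms
print(round(minimum_silent_period(115200) * 1000, 3))  # floor applies: 1.75 ms
```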
+ + """ + _check_string(request, minlength=1, description="request") + _check_int(number_of_bytes_to_read) + + self._print_debug( + "Will write to instrument (expecting {} bytes back): {!r} ({})".format( + number_of_bytes_to_read, request, _hexlify(request) + ) + ) + + if not self.serial.is_open: + self._print_debug("Opening port {}".format(self.serial.port)) + self.serial.open() + + if self.clear_buffers_before_each_transaction: + self._print_debug( + "Clearing serial buffers for port {}".format(self.serial.port) + ) + self.serial.reset_input_buffer() + self.serial.reset_output_buffer() + + if sys.version_info[0] > 2: + request = bytes( + request, encoding="latin1" + ) # Convert types to make it Python3 compatible + + # Sleep to make sure 3.5 character times have passed + minimum_silent_period = _calculate_minimum_silent_period(self.serial.baudrate) + time_since_read = _now() - _latest_read_times.get(self.serial.port, 0) + + if time_since_read < minimum_silent_period: + sleep_time = minimum_silent_period - time_since_read + + if self.debug: + template = ( + "Sleeping {:.2f} ms before sending. " + + "Minimum silent period: {:.2f} ms, time since read: {:.2f} ms." + ) + text = template.format( + sleep_time * _SECONDS_TO_MILLISECONDS, + minimum_silent_period * _SECONDS_TO_MILLISECONDS, + time_since_read * _SECONDS_TO_MILLISECONDS, + ) + self._print_debug(text) + + time.sleep(sleep_time) + + elif self.debug: + template = ( + "No sleep required before write. " + + "Time since previous read: {:.2f} ms, minimum silent period: {:.2f} ms." + ) + text = template.format( + time_since_read * _SECONDS_TO_MILLISECONDS, + minimum_silent_period * _SECONDS_TO_MILLISECONDS, + ) + self._print_debug(text) + + # Write request + latest_write_time = _now() + self.serial.write(request) + + # Read and discard local echo + if self.handle_local_echo: + local_echo_to_discard = self.serial.read(len(request)) + if self.debug: + template = "Discarding this local echo: {!r} ({} bytes)." 
+ text = template.format( + local_echo_to_discard, len(local_echo_to_discard) + ) + self._print_debug(text) + if local_echo_to_discard != request: + template = ( + "Local echo handling is enabled, but the local echo does " + + "not match the sent request. " + + "Request: {!r} ({} bytes), local echo: {!r} ({} bytes)." + ) + text = template.format( + request, + len(request), + local_echo_to_discard, + len(local_echo_to_discard), + ) + raise LocalEchoError(text) + + # Read response + answer = self.serial.read(number_of_bytes_to_read) + _latest_read_times[self.serial.port] = _now() + + if self.close_port_after_each_call: + self._print_debug("Closing port {}".format(self.serial.port)) + self.serial.close() + + if sys.version_info[0] > 2: + # Convert types to make it Python3 compatible + answer = str(answer, encoding="latin1") + + if self.debug: + template = ( + "Response from instrument: {!r} ({}) ({} bytes), " + + "roundtrip time: {:.1f} ms. Timeout for reading: {:.1f} ms.\n" + ) + text = template.format( + answer, + _hexlify(answer), + len(answer), + (_latest_read_times.get(self.serial.port, 0) - latest_write_time) + * _SECONDS_TO_MILLISECONDS, + self.serial.timeout * _SECONDS_TO_MILLISECONDS, + ) + self._print_debug(text) + + if not answer: + raise NoResponseError("No communication with the instrument (no answer)") + + return answer + + # For backward compatibility + _performCommand = _perform_command + + +# ########## # +# Exceptions # +# ########## # + + +class ModbusException(IOError): + """Base class for Modbus communication exceptions. + + Inherits from IOError, which is an alias for OSError in Python3. + """ + + +class SlaveReportedException(ModbusException): + """Base class for exceptions that the slave (instrument) reports.""" + + +class SlaveDeviceBusyError(SlaveReportedException): + """The slave is busy processing some command.""" + + +class NegativeAcknowledgeError(SlaveReportedException): + """The slave can not fulfil the programming request. 
+
+    This typically happens when using function code 13 or 14 decimal.
+    """
+
+
+class IllegalRequestError(SlaveReportedException):
+    """The slave has received an illegal request."""
+
+
+class MasterReportedException(ModbusException):
+    """Base class for exceptions that the master (computer) detects."""
+
+
+class NoResponseError(MasterReportedException):
+    """No response from the slave."""
+
+
+class LocalEchoError(MasterReportedException):
+    """There is some problem with the local echo."""
+
+
+class InvalidResponseError(MasterReportedException):
+    """The response does not fulfill the Modbus standard, for example wrong checksum."""
+
+
+# ################ #
+# Payload handling #
+# ################ #
+
+
+def _create_payload(
+    functioncode,
+    registeraddress,
+    value,
+    number_of_decimals,
+    number_of_registers,
+    number_of_bits,
+    signed,
+    byteorder,
+    payloadformat,
+):
+    """Create the payload.
+
+    Error checking should have been done before calling this function.
+
+    For argument descriptions, see the _generic_command() method.
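For function codes 3 and 4 the payload built here is simply the start address followed by the register count, each as two big-endian bytes. A sketch with a hypothetical helper mirroring ``_num_to_twobyte_string`` (byte strings are kept as latin-1 text, matching the library's internal representation):

```python
# Sketch (hypothetical helper mirroring _num_to_twobyte_string): the payload
# for a function code 3/4 read request is start address + register count,
# each encoded as two big-endian bytes in a latin-1 "byte string".
def num_to_twobyte_string(value):
    return chr((value >> 8) & 0xFF) + chr(value & 0xFF)


# Read 2 registers starting at address 0x0010:
payload = num_to_twobyte_string(0x0010) + num_to_twobyte_string(2)
print(repr(payload))  # '\x00\x10\x00\x02'
```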
+ + """ + if functioncode in [1, 2]: + return _num_to_twobyte_string(registeraddress) + _num_to_twobyte_string( + number_of_bits + ) + if functioncode in [3, 4]: + return _num_to_twobyte_string(registeraddress) + _num_to_twobyte_string( + number_of_registers + ) + if functioncode == 5: + return _num_to_twobyte_string(registeraddress) + _bit_to_bytestring(value) + if functioncode == 6: + return _num_to_twobyte_string(registeraddress) + _num_to_twobyte_string( + value, number_of_decimals, signed=signed + ) + if functioncode == 15: + if payloadformat == _PAYLOADFORMAT_BIT: + bitlist = [value] + else: + bitlist = value + return ( + _num_to_twobyte_string(registeraddress) + + _num_to_twobyte_string(number_of_bits) + + _num_to_onebyte_string( + _calculate_number_of_bytes_for_bits(number_of_bits) + ) + + _bits_to_bytestring(bitlist) + ) + if functioncode == 16: + if payloadformat == _PAYLOADFORMAT_REGISTER: + registerdata = _num_to_twobyte_string( + value, number_of_decimals, signed=signed + ) + elif payloadformat == _PAYLOADFORMAT_STRING: + registerdata = _textstring_to_bytestring(value, number_of_registers) + elif payloadformat == _PAYLOADFORMAT_LONG: + registerdata = _long_to_bytestring( + value, signed, number_of_registers, byteorder + ) + elif payloadformat == _PAYLOADFORMAT_FLOAT: + registerdata = _float_to_bytestring(value, number_of_registers, byteorder) + elif payloadformat == _PAYLOADFORMAT_REGISTERS: + registerdata = _valuelist_to_bytestring(value, number_of_registers) + + assert len(registerdata) == number_of_registers * _NUMBER_OF_BYTES_PER_REGISTER + + return ( + _num_to_twobyte_string(registeraddress) + + _num_to_twobyte_string(number_of_registers) + + _num_to_onebyte_string(len(registerdata)) + + registerdata + ) + raise ValueError("Wrong function code: " + str(functioncode)) + + +def _parse_payload( + payload, + functioncode, + registeraddress, + value, + number_of_decimals, + number_of_registers, + number_of_bits, + signed, + byteorder, + payloadformat, 
+):
+    _check_response_payload(
+        payload,
+        functioncode,
+        registeraddress,
+        value,
+        number_of_decimals,
+        number_of_registers,
+        number_of_bits,
+        signed,
+        byteorder,
+        payloadformat,
+    )
+
+    if functioncode in [1, 2]:
+        registerdata = payload[_NUMBER_OF_BYTES_BEFORE_REGISTERDATA:]
+        if payloadformat == _PAYLOADFORMAT_BIT:
+            return _bytestring_to_bits(registerdata, number_of_bits)[0]
+        elif payloadformat == _PAYLOADFORMAT_BITS:
+            return _bytestring_to_bits(registerdata, number_of_bits)
+
+    if functioncode in [3, 4]:
+        registerdata = payload[_NUMBER_OF_BYTES_BEFORE_REGISTERDATA:]
+        if payloadformat == _PAYLOADFORMAT_STRING:
+            return _bytestring_to_textstring(registerdata, number_of_registers)
+
+        elif payloadformat == _PAYLOADFORMAT_LONG:
+            return _bytestring_to_long(
+                registerdata, signed, number_of_registers, byteorder
+            )
+
+        elif payloadformat == _PAYLOADFORMAT_FLOAT:
+            return _bytestring_to_float(registerdata, number_of_registers, byteorder)
+
+        elif payloadformat == _PAYLOADFORMAT_REGISTERS:
+            return _bytestring_to_valuelist(registerdata, number_of_registers)
+
+        elif payloadformat == _PAYLOADFORMAT_REGISTER:
+            return _twobyte_string_to_num(
+                registerdata, number_of_decimals, signed=signed
+            )
+
+
+def _embed_payload(slaveaddress, mode, functioncode, payloaddata):
+    """Build a request from the slaveaddress, the function code and the payload data.
+
+    Args:
+        * slaveaddress (int): The address of the slave.
+        * mode (str): The modbus protocol mode (MODE_RTU or MODE_ASCII)
+        * functioncode (int): The function code for the command to be performed.
+          Can for example be 16 (Write register).
+        * payloaddata (str): The byte string to be sent to the slave.
+
+    Returns:
+        The built (raw) request string for sending to the slave (including CRC etc).
+
+    Raises:
+        ValueError, TypeError.
+
+    The resulting request has the format:
+    * RTU Mode: slaveaddress byte + functioncode byte + payloaddata + CRC (which is two bytes).
+    * ASCII Mode: header (:) + slaveaddress (2 characters) + functioncode
+      (2 characters) + payloaddata + LRC (which is two characters) + footer (CRLF)
+
+    The LRC or CRC is calculated from the byte string made up of slaveaddress +
+    functioncode + payloaddata.
+    The header, LRC/CRC, and footer are excluded from the calculation.
+
+    """
+    _check_slaveaddress(slaveaddress)
+    _check_mode(mode)
+    _check_functioncode(functioncode, None)
+    _check_string(payloaddata, description="payload")
+
+    first_part = (
+        _num_to_onebyte_string(slaveaddress)
+        + _num_to_onebyte_string(functioncode)
+        + payloaddata
+    )
+
+    if mode == MODE_ASCII:
+        request = (
+            _ASCII_HEADER
+            + _hexencode(first_part)
+            + _hexencode(_calculate_lrc_string(first_part))
+            + _ASCII_FOOTER
+        )
+    else:
+        request = first_part + _calculate_crc_string(first_part)
+
+    return request
+
+
+def _extract_payload(response, slaveaddress, mode, functioncode):
+    """Extract the payload data part from the slave's response.
+
+    Args:
+        * response (str): The raw response byte string from the slave.
+          This is different for RTU and ASCII.
+        * slaveaddress (int): The address of the slave. Used here for error checking only.
+        * mode (str): The modbus protocol mode (MODE_RTU or MODE_ASCII)
+        * functioncode (int): Used here for error checking only.
+
+    Returns:
+        The payload part of the *response* string. Conversion from Modbus ASCII
+        has been done if applicable.
+
+    Raises:
+        ValueError, TypeError, ModbusException (or subclasses).
+
+    Raises an exception if there is any problem with the received address,
+    the functioncode or the CRC.
+
+    The received response should have the format:
+
+    * RTU Mode: slaveaddress byte + functioncode byte + payloaddata + CRC (which is two bytes)
+    * ASCII Mode: header (:) + slaveaddress byte + functioncode byte +
+      payloaddata + LRC (which is two characters) + footer (CRLF)
+
+    For development purposes, this function can also be used to extract the payload
+    from the request sent TO the slave.
+ + """ + # Number of bytes before the response payload (in stripped response) + NUMBER_OF_RESPONSE_STARTBYTES = 2 + + NUMBER_OF_CRC_BYTES = 2 + NUMBER_OF_LRC_BYTES = 1 + MINIMAL_RESPONSE_LENGTH_RTU = NUMBER_OF_RESPONSE_STARTBYTES + NUMBER_OF_CRC_BYTES + MINIMAL_RESPONSE_LENGTH_ASCII = 9 + + # Argument validity testing (ValueError/TypeError at lib programming error) + _check_string(response, description="response") + _check_slaveaddress(slaveaddress) + _check_mode(mode) + _check_functioncode(functioncode, None) + + plainresponse = response + + # Validate response length + if mode == MODE_ASCII: + if len(response) < MINIMAL_RESPONSE_LENGTH_ASCII: + raise InvalidResponseError( + "Too short Modbus ASCII response (minimum length {} bytes). Response: {!r}".format( + MINIMAL_RESPONSE_LENGTH_ASCII, response + ) + ) + elif len(response) < MINIMAL_RESPONSE_LENGTH_RTU: + raise InvalidResponseError( + "Too short Modbus RTU response (minimum length {} bytes). Response: {!r}".format( + MINIMAL_RESPONSE_LENGTH_RTU, response + ) + ) + + if mode == MODE_ASCII: + + # Validate the ASCII header and footer. + if response[_BYTEPOSITION_FOR_ASCII_HEADER] != _ASCII_HEADER: + raise InvalidResponseError( + "Did not find header " + + "({!r}) as start of ASCII response. The plain response is: {!r}".format( + _ASCII_HEADER, response + ) + ) + elif response[-len(_ASCII_FOOTER) :] != _ASCII_FOOTER: + raise InvalidResponseError( + "Did not find footer " + + "({!r}) as end of ASCII response. The plain response is: {!r}".format( + _ASCII_FOOTER, response + ) + ) + + # Strip ASCII header and footer + response = response[1:-2] + + if len(response) % 2 != 0: + template = ( + "Stripped ASCII frames should have an even number of bytes, but is {} bytes. 
" + + "The stripped response is: {!r} (plain response: {!r})" + ) + raise InvalidResponseError( + template.format(len(response), response, plainresponse) + ) + + # Convert the ASCII (stripped) response string to RTU-like response string + response = _hexdecode(response) + + # Validate response checksum + if mode == MODE_ASCII: + calculate_checksum = _calculate_lrc_string + number_of_checksum_bytes = NUMBER_OF_LRC_BYTES + else: + calculate_checksum = _calculate_crc_string + number_of_checksum_bytes = NUMBER_OF_CRC_BYTES + + received_checksum = response[-number_of_checksum_bytes:] + response_without_checksum = response[0 : (len(response) - number_of_checksum_bytes)] + calculated_checksum = calculate_checksum(response_without_checksum) + + if received_checksum != calculated_checksum: + template = ( + "Checksum error in {} mode: {!r} instead of {!r} . The response " + + "is: {!r} (plain response: {!r})" + ) + text = template.format( + mode, received_checksum, calculated_checksum, response, plainresponse + ) + raise InvalidResponseError(text) + + # Check slave address + responseaddress = ord(response[_BYTEPOSITION_FOR_SLAVEADDRESS]) + + if responseaddress != slaveaddress: + raise InvalidResponseError( + "Wrong return slave address: {} instead of {}. The response is: {!r}".format( + responseaddress, slaveaddress, response + ) + ) + + # Check if slave indicates error + _check_response_slaveerrorcode(response) + + # Check function code + received_functioncode = ord(response[_BYTEPOSITION_FOR_FUNCTIONCODE]) + if received_functioncode != functioncode: + raise InvalidResponseError( + "Wrong functioncode: {} instead of {}. 
The response is: {!r}".format(
+            received_functioncode, functioncode, response
+        )
+    )
+
+    # Read data payload
+    first_databyte_number = NUMBER_OF_RESPONSE_STARTBYTES
+
+    if mode == MODE_ASCII:
+        last_databyte_number = len(response) - NUMBER_OF_LRC_BYTES
+    else:
+        last_databyte_number = len(response) - NUMBER_OF_CRC_BYTES
+
+    payload = response[first_databyte_number:last_databyte_number]
+    return payload
+
+
+# ###################################### #
+# Serial communication utility functions #
+# ###################################### #
+
+
+def _predict_response_size(mode, functioncode, payload_to_slave):
+    """Calculate the number of bytes that should be received from the slave.
+
+    Args:
+        * mode (str): The modbus protocol mode (MODE_RTU or MODE_ASCII)
+        * functioncode (int): Modbus function code.
+        * payload_to_slave (str): The raw request that is to be sent to the slave
+          (not hex encoded string)
+
+    Returns:
+        The predicted number of bytes (int) in the response.
+
+    Raises:
+        ValueError, TypeError.
+ + """ + MIN_PAYLOAD_LENGTH = 4 # For implemented functioncodes here + BYTERANGE_FOR_GIVEN_SIZE = slice(2, 4) # Within the payload + + NUMBER_OF_PAYLOAD_BYTES_IN_WRITE_CONFIRMATION = 4 + NUMBER_OF_PAYLOAD_BYTES_FOR_BYTECOUNTFIELD = 1 + + RTU_TO_ASCII_PAYLOAD_FACTOR = 2 + + NUMBER_OF_RTU_RESPONSE_STARTBYTES = 2 + NUMBER_OF_RTU_RESPONSE_ENDBYTES = 2 + NUMBER_OF_ASCII_RESPONSE_STARTBYTES = 5 + NUMBER_OF_ASCII_RESPONSE_ENDBYTES = 4 + + # Argument validity testing + _check_mode(mode) + _check_functioncode(functioncode, None) + _check_string(payload_to_slave, description="payload", minlength=MIN_PAYLOAD_LENGTH) + + # Calculate payload size + if functioncode in [5, 6, 15, 16]: + response_payload_size = NUMBER_OF_PAYLOAD_BYTES_IN_WRITE_CONFIRMATION + + elif functioncode in [1, 2, 3, 4]: + given_size = _twobyte_string_to_num(payload_to_slave[BYTERANGE_FOR_GIVEN_SIZE]) + if functioncode in [1, 2]: + # Algorithm from MODBUS APPLICATION PROTOCOL SPECIFICATION V1.1b + number_of_inputs = given_size + response_payload_size = ( + NUMBER_OF_PAYLOAD_BYTES_FOR_BYTECOUNTFIELD + + number_of_inputs // 8 + + (1 if number_of_inputs % 8 else 0) + ) + + elif functioncode in [3, 4]: + number_of_registers = given_size + response_payload_size = ( + NUMBER_OF_PAYLOAD_BYTES_FOR_BYTECOUNTFIELD + + number_of_registers * _NUMBER_OF_BYTES_PER_REGISTER + ) + + else: + raise ValueError( + "Wrong functioncode: {}. The payload is: {!r}".format( + functioncode, payload_to_slave + ) + ) + + # Calculate number of bytes to read + if mode == MODE_ASCII: + return ( + NUMBER_OF_ASCII_RESPONSE_STARTBYTES + + response_payload_size * RTU_TO_ASCII_PAYLOAD_FACTOR + + NUMBER_OF_ASCII_RESPONSE_ENDBYTES + ) + else: + return ( + NUMBER_OF_RTU_RESPONSE_STARTBYTES + + response_payload_size + + NUMBER_OF_RTU_RESPONSE_ENDBYTES + ) + + +def _calculate_minimum_silent_period(baudrate): + """Calculate the silent period length between messages. + + It should correspond to the time to send 3.5 characters. 
+ + Args: + baudrate (numerical): The baudrate for the serial port + + Returns: + The number of seconds (float) that should pass between each message on the bus. + + Raises: + ValueError, TypeError. + + """ + # Avoid division by zero + _check_numerical(baudrate, minvalue=1, description="baudrate") + + BITTIMES_PER_CHARACTERTIME = 11 + MINIMUM_SILENT_CHARACTERTIMES = 3.5 + MINIMUM_SILENT_TIME_SECONDS = 0.00175 # See Modbus standard + + bittime = 1 / float(baudrate) + return max( + bittime * BITTIMES_PER_CHARACTERTIME * MINIMUM_SILENT_CHARACTERTIMES, + MINIMUM_SILENT_TIME_SECONDS, + ) + + +# ########################## # +# String and num conversions # +# ########################## # + + +def _num_to_onebyte_string(inputvalue): + """Convert a numerical value to a one-byte string. + + Args: + inputvalue (int): The value to be converted. Should be >=0 and <=255. + + Returns: + A one-byte string created by chr(inputvalue). + + Raises: + TypeError, ValueError + + """ + _check_int(inputvalue, minvalue=0, maxvalue=0xFF) + + return chr(inputvalue) + + +def _num_to_twobyte_string(value, number_of_decimals=0, lsb_first=False, signed=False): + r"""Convert a numerical value to a two-byte string, possibly scaling it. + + Args: + * value (float or int): The numerical value to be converted. + * number_of_decimals (int): Number of decimals, 0 or more, for scaling. + * lsb_first (bool): Whether the least significant byte should be first in + the resulting string. + * signed (bool): Whether negative values should be accepted. + + Returns: + A two-byte string. + + Raises: + TypeError, ValueError. Gives DeprecationWarning instead of ValueError + for some values in Python 2.6. + + Use ``number_of_decimals=1`` to multiply ``value`` by 10 before sending it to + the slave register. Similarly ``number_of_decimals=2`` will multiply ``value`` + by 100 before sending it to the slave register. + + Use the parameter ``signed=True`` if making a bytestring that can hold + negative values. 
Then negative input will be automatically converted into
+    upper range data (two's complement).
+
+    The byte order is controlled by the ``lsb_first`` parameter, as seen here:
+
+    ======================= ============= ====================================
+    ``lsb_first`` parameter Endianness    Description
+    ======================= ============= ====================================
+    False (default)         Big-endian    Most significant byte is sent first
+    True                    Little-endian Least significant byte is sent first
+    ======================= ============= ====================================
+
+    For example:
+        To store value=77.0, use ``number_of_decimals = 1`` if the
+        register will hold it as 770 internally. The value 770 (dec) is 0302 (hex),
+        where the most significant byte is 03 (hex) and the least significant byte
+        is 02 (hex). With ``lsb_first = False``, the most significant byte is given
+        first, so the resulting string is ``\x03\x02``, which has the length 2.
+
+    """
+    _check_numerical(value, description="inputvalue")
+    _check_int(
+        number_of_decimals,
+        minvalue=0,
+        maxvalue=_MAX_NUMBER_OF_DECIMALS,
+        description="number of decimals",
+    )
+    _check_bool(lsb_first, description="lsb_first")
+    _check_bool(signed, description="signed parameter")
+
+    multiplier = 10 ** number_of_decimals
+    integer = int(float(value) * multiplier)
+
+    if lsb_first:
+        formatcode = "<"  # Little-endian
+    else:
+        formatcode = ">"  # Big-endian
+    if signed:
+        formatcode += "h"  # (Signed) short (2 bytes)
+    else:
+        formatcode += "H"  # Unsigned short (2 bytes)
+
+    outstring = _pack(formatcode, integer)
+    assert len(outstring) == 2
+    return outstring
+
+
+def _twobyte_string_to_num(bytestring, number_of_decimals=0, signed=False):
+    r"""Convert a two-byte string to a numerical value, possibly scaling it.
+
+    Args:
+        * bytestring (str): A string of length 2.
+        * number_of_decimals (int): The number of decimals. Defaults to 0.
+        * signed (bool): Whether large positive values should be interpreted as
+          negative values.
+
+    Returns:
+        The numerical value (int or float) calculated from the ``bytestring``.
+
+    Raises:
+        TypeError, ValueError
+
+    Use the parameter ``signed=True`` if converting a bytestring that can hold
+    negative values. Then upper range data will be automatically converted into
+    negative return values (two's complement).
+
+    Use ``number_of_decimals=1`` to divide the received data by 10 before returning
+    the value. Similarly ``number_of_decimals=2`` will divide the received data by
+    100 before returning the value.
+
+    The byte order is big-endian, meaning that the most significant byte is sent first.
+
+    For example:
+        A string ``\x03\x02`` (which has the length 2) corresponds to 0302 (hex) =
+        770 (dec). If ``number_of_decimals = 1``, then this is converted to 77.0 (float).
+
+    """
+    _check_string(bytestring, minlength=2, maxlength=2, description="bytestring")
+    _check_int(
+        number_of_decimals,
+        minvalue=0,
+        maxvalue=_MAX_NUMBER_OF_DECIMALS,
+        description="number of decimals",
+    )
+    _check_bool(signed, description="signed parameter")
+
+    formatcode = ">"  # Big-endian
+    if signed:
+        formatcode += "h"  # (Signed) short (2 bytes)
+    else:
+        formatcode += "H"  # Unsigned short (2 bytes)
+
+    fullregister = _unpack(formatcode, bytestring)
+
+    if number_of_decimals == 0:
+        return fullregister
+    divisor = 10 ** number_of_decimals
+    return fullregister / float(divisor)
+
+
+def _long_to_bytestring(
+    value, signed=False, number_of_registers=2, byteorder=BYTEORDER_BIG
+):
+    """Convert a long integer to a bytestring.
+
+    Long integers (32 bits = 4 bytes) are stored in two consecutive 16-bit registers
+    in the slave.
+
+    Args:
+        * value (int): The numerical value to be converted.
+        * signed (bool): Whether negative values should be accepted.
+        * number_of_registers (int): Should be 2. For error checking only.
+        * byteorder (int): How multi-register data should be interpreted.
+
+    Returns:
+        A bytestring (4 bytes).
+
+    Raises:
+        TypeError, ValueError
+
+    """
+    _check_int(value, description="inputvalue")
+    _check_bool(signed, description="signed parameter")
+    _check_int(
+        number_of_registers, minvalue=2, maxvalue=2, description="number of registers"
+    )
+    _check_int(
+        byteorder, minvalue=0, maxvalue=_MAX_BYTEORDER_VALUE, description="byteorder"
+    )
+
+    if byteorder in [BYTEORDER_BIG, BYTEORDER_BIG_SWAP]:
+        formatcode = ">"
+    else:
+        formatcode = "<"
+    if signed:
+        formatcode += "l"  # (Signed) long (4 bytes)
+    else:
+        formatcode += "L"  # Unsigned long (4 bytes)
+
+    outstring = _pack(formatcode, value)
+    if byteorder in [BYTEORDER_BIG_SWAP, BYTEORDER_LITTLE_SWAP]:
+        outstring = _swap(outstring)
+
+    assert len(outstring) == 4
+    return outstring
+
+
+def _bytestring_to_long(
+    bytestring, signed=False, number_of_registers=2, byteorder=BYTEORDER_BIG
+):
+    """Convert a bytestring to a long integer.
+
+    Long integers (32 bits = 4 bytes) are stored in two consecutive 16-bit registers
+    in the slave.
+
+    Args:
+        * bytestring (str): A string of length 4.
+        * signed (bool): Whether large positive values should be interpreted as
+          negative values.
+        * number_of_registers (int): Should be 2. For error checking only.
+        * byteorder (int): How multi-register data should be interpreted.
+
+    Returns:
+        The numerical value (int).
+ + Raises: + ValueError, TypeError + + """ + _check_string(bytestring, "byte string", minlength=4, maxlength=4) + _check_bool(signed, description="signed parameter") + _check_int( + number_of_registers, minvalue=2, maxvalue=2, description="number of registers" + ) + _check_int( + byteorder, minvalue=0, maxvalue=_MAX_BYTEORDER_VALUE, description="byteorder" + ) + + if byteorder in [BYTEORDER_BIG, BYTEORDER_BIG_SWAP]: + formatcode = ">" + else: + formatcode = "<" + if signed: + formatcode += "l" # (Signed) long (4 bytes) + else: + formatcode += "L" # Unsigned long (4 bytes) + + if byteorder in [BYTEORDER_BIG_SWAP, BYTEORDER_LITTLE_SWAP]: + bytestring = _swap(bytestring) + + return _unpack(formatcode, bytestring) + + +def _float_to_bytestring(value, number_of_registers=2, byteorder=BYTEORDER_BIG): + r"""Convert a numerical value to a bytestring. + + Floats are stored in two or more consecutive 16-bit registers in the slave. The + encoding is according to the standard IEEE 754. + + ====================================== ================= =========== ================= + Type of floating point number in slave Size Registers Range + ====================================== ================= =========== ================= + Single precision (binary32) 32 bits (4 bytes) 2 registers 1.4E-45 to 3.4E38 + Double precision (binary64) 64 bits (8 bytes) 4 registers 5E-324 to 1.8E308 + ====================================== ================= =========== ================= + + A floating point value of 1.0 is encoded (in single precision) as 3f800000 (hex). + This will give a byte string ``'\x3f\x80\x00\x00'`` (big endian). + + Args: + * value (float or int): The numerical value to be converted. + * number_of_registers (int): Can be 2 or 4. + * byteorder (int): How multi-register data should be interpreted. + + Returns: + A bytestring (4 or 8 bytes). 
+ + Raises: + TypeError, ValueError + + """ + _check_numerical(value, description="inputvalue") + _check_int( + number_of_registers, minvalue=2, maxvalue=4, description="number of registers" + ) + _check_int( + byteorder, minvalue=0, maxvalue=_MAX_BYTEORDER_VALUE, description="byteorder" + ) + + if byteorder in [BYTEORDER_BIG, BYTEORDER_BIG_SWAP]: + formatcode = ">" + else: + formatcode = "<" + if number_of_registers == 2: + formatcode += "f" # Float (4 bytes) + lengthtarget = 4 + elif number_of_registers == 4: + formatcode += "d" # Double (8 bytes) + lengthtarget = 8 + else: + raise ValueError( + "Wrong number of registers! Given value is {0!r}".format( + number_of_registers + ) + ) + + outstring = _pack(formatcode, value) + if byteorder in [BYTEORDER_BIG_SWAP, BYTEORDER_LITTLE_SWAP]: + outstring = _swap(outstring) + assert len(outstring) == lengthtarget + return outstring + + +def _bytestring_to_float(bytestring, number_of_registers=2, byteorder=BYTEORDER_BIG): + """Convert a four-byte string to a float. + + Floats are stored in two or more consecutive 16-bit registers in the slave. + + For discussion on precision, number of bits, number of registers, the range, byte order + and on alternative names, see :func:`minimalmodbus._float_to_bytestring`. + + Args: + * bytestring (str): A string of length 4 or 8. + * number_of_registers (int): Can be 2 or 4. + * byteorder (int): How multi-register data should be interpreted. + + Returns: + A float. 
+
+    Raises:
+        TypeError, ValueError
+
+    """
+    _check_string(bytestring, minlength=4, maxlength=8, description="bytestring")
+    _check_int(
+        number_of_registers, minvalue=2, maxvalue=4, description="number of registers"
+    )
+    _check_int(
+        byteorder, minvalue=0, maxvalue=_MAX_BYTEORDER_VALUE, description="byteorder"
+    )
+    number_of_bytes = _NUMBER_OF_BYTES_PER_REGISTER * number_of_registers
+
+    if byteorder in [BYTEORDER_BIG, BYTEORDER_BIG_SWAP]:
+        formatcode = ">"
+    else:
+        formatcode = "<"
+    if number_of_registers == 2:
+        formatcode += "f"  # Float (4 bytes)
+    elif number_of_registers == 4:
+        formatcode += "d"  # Double (8 bytes)
+    else:
+        raise ValueError(
+            "Wrong number of registers! Given value is {0!r}".format(
+                number_of_registers
+            )
+        )
+
+    if len(bytestring) != number_of_bytes:
+        raise ValueError(
+            "Wrong length of the byte string! Given value is "
+            + "{0!r}, and number_of_registers is {1!r}.".format(
+                bytestring, number_of_registers
+            )
+        )
+
+    if byteorder in [BYTEORDER_BIG_SWAP, BYTEORDER_LITTLE_SWAP]:
+        bytestring = _swap(bytestring)
+    return _unpack(formatcode, bytestring)
+
+
+def _textstring_to_bytestring(inputstring, number_of_registers=16):
+    """Convert a text string to a bytestring.
+
+    Each 16-bit register in the slave is interpreted as two characters (1 byte = 8 bits).
+    For example 16 consecutive registers can hold 32 characters (32 bytes).
+
+    Not much conversion is done, mostly error checking and string padding.
+    If the inputstring is shorter than the allocated space, it is padded with
+    spaces at the end.
+
+    Args:
+        * inputstring (str): The string to be stored in the slave.
+          Max 2*number_of_registers characters.
+        * number_of_registers (int): The number of registers allocated for the string.
+
+    Returns:
+        A bytestring (str).
+
+    Raises:
+        TypeError, ValueError
+
+    """
+    _check_int(
+        number_of_registers,
+        minvalue=1,
+        maxvalue=_MAX_NUMBER_OF_REGISTERS_TO_WRITE,
+        description="number of registers",
+    )
+    max_characters = _NUMBER_OF_BYTES_PER_REGISTER * number_of_registers
+    _check_string(inputstring, "input string", minlength=1, maxlength=max_characters)
+
+    bytestring = inputstring.ljust(max_characters)  # Pad with space
+    assert len(bytestring) == max_characters
+    return bytestring
+
+
+def _bytestring_to_textstring(bytestring, number_of_registers=16):
+    """Convert a bytestring to a text string.
+
+    Each 16-bit register in the slave is interpreted as two characters (1 byte = 8 bits).
+    For example 16 consecutive registers can hold 32 characters (32 bytes).
+
+    Not much conversion is done, mostly error checking.
+
+    Args:
+        * bytestring (str): The string from the slave. Length = 2*number_of_registers
+        * number_of_registers (int): The number of registers allocated for the string.
+
+    Returns:
+        The text string (str).
+
+    Raises:
+        TypeError, ValueError
+
+    """
+    _check_int(
+        number_of_registers,
+        minvalue=1,
+        maxvalue=_MAX_NUMBER_OF_REGISTERS_TO_READ,
+        description="number of registers",
+    )
+    max_characters = _NUMBER_OF_BYTES_PER_REGISTER * number_of_registers
+    _check_string(
+        bytestring, "byte string", minlength=max_characters, maxlength=max_characters
+    )
+
+    textstring = bytestring
+    return textstring
+
+
+def _valuelist_to_bytestring(valuelist, number_of_registers):
+    """Convert a list of numerical values to a bytestring.
+
+    Each element is 'unsigned INT16'.
+
+    Args:
+        * valuelist (list of int): The input list. The elements should be in the
+          range 0 to 65535.
+        * number_of_registers (int): The number of registers. For error checking.
+          Should equal the number of elements in valuelist.
+
+    Returns:
+        A bytestring (str).
Length = 2*number_of_registers + + Raises: + TypeError, ValueError + + """ + MINVALUE = 0 + MAXVALUE = 0xFFFF + + _check_int(number_of_registers, minvalue=1, description="number of registers") + + if not isinstance(valuelist, list): + raise TypeError( + "The valuelist parameter must be a list. Given {0!r}.".format(valuelist) + ) + + for value in valuelist: + _check_int( + value, + minvalue=MINVALUE, + maxvalue=MAXVALUE, + description="elements in the input value list", + ) + + _check_int( + len(valuelist), + minvalue=number_of_registers, + maxvalue=number_of_registers, + description="length of the list", + ) + + number_of_bytes = _NUMBER_OF_BYTES_PER_REGISTER * number_of_registers + + bytestring = "" + for value in valuelist: + bytestring += _num_to_twobyte_string(value, signed=False) + + assert len(bytestring) == number_of_bytes + return bytestring + + +def _bytestring_to_valuelist(bytestring, number_of_registers): + """Convert a bytestring to a list of numerical values. + + The bytestring is interpreted as 'unsigned INT16'. + + Args: + * bytestring (str): The string from the slave. Length = 2*number_of_registers + * number_of_registers (int): The number of registers. For error checking. + + Returns: + A list of integers. + + Raises: + TypeError, ValueError + + """ + _check_int(number_of_registers, minvalue=1, description="number of registers") + number_of_bytes = _NUMBER_OF_BYTES_PER_REGISTER * number_of_registers + _check_string( + bytestring, "byte string", minlength=number_of_bytes, maxlength=number_of_bytes + ) + + values = [] + for i in range(number_of_registers): + offset = _NUMBER_OF_BYTES_PER_REGISTER * i + substring = bytestring[offset : (offset + _NUMBER_OF_BYTES_PER_REGISTER)] + values.append(_twobyte_string_to_num(substring)) + + return values + + +def _now(): + """Return a timestamp for time duration measurements. + + Returns a float, that increases with 1.0 per second. + The starting point is undefined. 
+ """ + if hasattr(time, "monotonic"): + return time.monotonic() + return time.time() + + +def _pack(formatstring, value): + """Pack a value into a bytestring. + + Uses the built-in :mod:`struct` Python module. + + Args: + * formatstring (str): String for the packing. See the :mod:`struct` module + for details. + * value (depends on formatstring): The value to be packed + + Returns: + A bytestring (str). + + Raises: + ValueError + + Note that the :mod:`struct` module produces byte buffers for Python3, + but bytestrings for Python2. This is compensated for automatically. + + """ + _check_string(formatstring, description="formatstring", minlength=1) + + try: + result = struct.pack(formatstring, value) + except Exception: + errortext = ( + "The value to send is probably out of range, as the num-to-bytestring " + ) + errortext += "conversion failed. Value: {0!r} Struct format code is: {1}" + raise ValueError(errortext.format(value, formatstring)) + + if sys.version_info[0] > 2: + return str( + result, encoding="latin1" + ) # Convert types to make it Python3 compatible + return result + + +def _unpack(formatstring, packed): + """Unpack a bytestring into a value. + + Uses the built-in :mod:`struct` Python module. + + Args: + * formatstring (str): String for the packing. See the :mod:`struct` module + for details. + * packed (str): The bytestring to be unpacked. + + Returns: + A value. The type depends on the formatstring. + + Raises: + ValueError + + Note that the :mod:`struct` module wants byte buffers for Python3, + but bytestrings for Python2. This is compensated for automatically. 
+ + """ + _check_string(formatstring, description="formatstring", minlength=1) + _check_string(packed, description="packed string", minlength=1) + + if sys.version_info[0] > 2: + packed = bytes( + packed, encoding="latin1" + ) # Convert types to make it Python3 compatible + + try: + value = struct.unpack(formatstring, packed)[0] + except Exception: + errortext = ( + "The received bytestring is probably wrong, as the bytestring-to-num " + ) + errortext += "conversion failed. Bytestring: {0!r} Struct format code is: {1}" + raise InvalidResponseError(errortext.format(packed, formatstring)) + + return value + + +def _swap(bytestring): + """Swap characters pairwise in a string. + + This corresponds to a "byte swap". + + Args: + * bytestring (str): input. The length should be an even number. + + Return the string with characters swapped. + + """ + length = len(bytestring) + if length % 2: + raise ValueError( + "The length of the bytestring should be even. Given {!r}.".format( + bytestring + ) + ) + templist = list(bytestring) + templist[1:length:2], templist[:length:2] = ( + templist[:length:2], + templist[1:length:2], + ) + return "".join(templist) + + +def _hexencode(bytestring, insert_spaces=False): + r"""Convert a byte string to a hex encoded string. + + For example 'J' will return '4A', and ``'\x04'`` will return '04'. + + Args: + * bytestring (str): Can be for example ``'A\x01B\x45'``. + * insert_spaces (bool): Insert space characters between pair of characters + to increase readability. + + Returns: + A string of twice the length, with characters in the range '0' to '9' and + 'A' to 'F'. The string will be longer if spaces are inserted. 
+ + Raises: + TypeError, ValueError + + """ + _check_string(bytestring, description="byte string") + + separator = "" if not insert_spaces else " " + + # Use plain string formatting instead of binhex.hexlify, + # in order to have it Python 2.x and 3.x compatible + + byte_representions = [] + for char in bytestring: + byte_representions.append("{0:02X}".format(ord(char))) + return separator.join(byte_representions).strip() + + +def _hexdecode(hexstring): + r"""Convert a hex encoded string to a byte string. + + For example '4A' will return 'J', and '04' will return ``'\x04'`` (which has + length 1). + + Args: + * hexstring (str): Can be for example 'A3' or 'A3B4'. Must be of even length. + * Allowed characters are '0' to '9', 'a' to 'f' and 'A' to 'F' (not space). + + Returns: + A string of half the length, with characters corresponding to all 0-255 + values for each byte. + + Raises: + TypeError, ValueError + + """ + # Note: For Python3 the appropriate would be: raise TypeError(new_error_message) from err + # but the Python2 interpreter will indicate SyntaxError. + # Thus we need to live with this warning in Python3: + # 'During handling of the above exception, another exception occurred' + + _check_string(hexstring, description="hexstring") + + if len(hexstring) % 2 != 0: + raise ValueError( + "The input hexstring must be of even length. Given: {!r}".format(hexstring) + ) + + if sys.version_info[0] > 2: + converted_bytes = bytes(hexstring, "latin1") + try: + return str(binascii.unhexlify(converted_bytes), encoding="latin1") + except binascii.Error as err: + new_error_message = "Hexdecode reported an error: {!s}. Input hexstring: {}".format( + err.args[0], hexstring + ) + raise TypeError(new_error_message) + + else: + try: + return hexstring.decode("hex") + except TypeError: + # TODO When Python3 only, show info from first exception + raise TypeError( + "Hexdecode reported an error. 
Input hexstring: {}".format(hexstring)
+            )
+
+
+def _hexlify(bytestring):
+    """Convert a byte string to a hex encoded string, with spaces for easier reading.
+
+    This is just a facade for _hexencode() with insert_spaces = True.
+
+    See _hexencode() for details.
+
+    """
+    return _hexencode(bytestring, insert_spaces=True)
+
+
+def _calculate_number_of_bytes_for_bits(number_of_bits):
+    """Calculate number of full bytes required to house a number of bits.
+
+    Args:
+        * number_of_bits (int): Number of bits
+
+    Error checking should have been done before.
+
+    Algorithm from MODBUS APPLICATION PROTOCOL SPECIFICATION V1.1b
+
+    """
+    result = number_of_bits // _BITS_PER_BYTE  # Integer division in Python2 and 3
+    if number_of_bits % _BITS_PER_BYTE:
+        result += 1
+    return result
+
+
+def _bit_to_bytestring(value):
+    """Create the bit pattern that is used for writing single bits.
+
+    Used for functioncode 5. The same value is sent back in the response
+    from the slave.
+
+    This is basically a storage of numerical constants.
+
+    Args:
+        * value (int): can be 0 or 1
+
+    Returns:
+        The bit pattern (string).
+
+    Raises:
+        TypeError, ValueError
+
+    """
+    _check_int(value, minvalue=0, maxvalue=1, description="inputvalue")
+
+    if value == 0:
+        return "\x00\x00"
+    else:
+        return "\xff\x00"
+
+
+def _bits_to_bytestring(valuelist):
+    """Build a bytestring from a list of bits.
+
+    This is used for functioncode 15.
+
+    Args:
+        * valuelist (list of int): 0 or 1
+
+    Returns a bytestring.
+
+    """
+    if not isinstance(valuelist, list):
+        raise TypeError(
+            "The input should be a list. " "Given: {!r}".format(valuelist)
+        )
+    for value in valuelist:
+        if value not in [0, 1, False, True]:
+            raise ValueError(
+                "Wrong value in list of bits. 
" + "Given: {!r}".format(value) + ) + + list_position = 0 + outputstring = "" + while list_position < len(valuelist): + sublist = valuelist[list_position : (list_position + _BITS_PER_BYTE)] + + bytevalue = 0 + for bitposition, value in enumerate(sublist): + bytevalue |= value << bitposition + outputstring += chr(bytevalue) + + list_position += _BITS_PER_BYTE + return outputstring + + +def _bytestring_to_bits(bytestring, number_of_bits): + """Parse bits from a bytestring. + + This is used for parsing the bits in response messages for functioncode 1 and 2. + + The first byte in the bytestring contains info on the addressed bit + (in LSB in that byte). Second bit from right contains info on the bit + on the next address. + + Next byte in the bytestring contains data on next 8 bits. Might be padded with + zeros toward MSB. + + Args: + * bytestring (str): input string + * number_of_bits (int): Number of bits to extract + + Returns a list of values (0 or 1). The length of the list is equal to number_of_bits. + + """ + expected_length = _calculate_number_of_bytes_for_bits(number_of_bits) + if len(bytestring) != expected_length: + raise ValueError( + "Wrong length of bytestring. Expected is " + + "{} bytes (for {} bits), actual is {} bytes.".format( + expected_length, number_of_bits, len(bytestring) + ) + ) + total_list = [] + for character in bytestring: + bytevalue = ord(character) + for bitposition in range(_BITS_PER_BYTE): + bitvalue = (bytevalue & (1 << bitposition)) > 0 + total_list.append(int(bitvalue)) + return total_list[:number_of_bits] + + +# ################### # +# Number manipulation # +# ################### # + + +def _twos_complement(x, bits=16): + """Calculate the two's complement of an integer. + + Then also negative values can be represented by an upper range of positive values. + See https://en.wikipedia.org/wiki/Two%27s_complement + + Args: + * x (int): input integer. + * bits (int): number of bits, must be > 0. 
+
+    Returns:
+        An int that represents the two's complement of the input.
+
+    Example for bits=8:
+
+    ==== =======
+    x    returns
+    ==== =======
+    0    0
+    1    1
+    127  127
+    -128 128
+    -127 129
+    -1   255
+    ==== =======
+
+    """
+    _check_int(bits, minvalue=0, description="number of bits")
+    _check_int(x, description="input")
+    upperlimit = 2 ** (bits - 1) - 1
+    lowerlimit = -2 ** (bits - 1)
+    if x > upperlimit or x < lowerlimit:
+        raise ValueError(
+            "The input value is out of range. Given value is "
+            + "{0}, but allowed range is {1} to {2} when using {3} bits.".format(
+                x, lowerlimit, upperlimit, bits
+            )
+        )
+
+    # Calculate the two's complement
+    if x >= 0:
+        return x
+    return x + 2 ** bits
+
+
+def _from_twos_complement(x, bits=16):
+    """Calculate the inverse of the two's complement of an integer.
+
+    Args:
+        * x (int): input integer.
+        * bits (int): number of bits, must be > 0.
+
+    Returns:
+        An int that represents the inverse of the two's complement of the input.
+
+    Example for bits=8:
+
+    === =======
+    x   returns
+    === =======
+    0   0
+    1   1
+    127 127
+    128 -128
+    129 -127
+    255 -1
+    === =======
+
+    """
+    _check_int(bits, minvalue=0, description="number of bits")
+
+    _check_int(x, description="input")
+    upperlimit = 2 ** (bits) - 1
+    lowerlimit = 0
+    if x > upperlimit or x < lowerlimit:
+        raise ValueError(
+            "The input value is out of range. Given value is "
+            + "{0}, but allowed range is {1} to {2} when using {3} bits.".format(
+                x, lowerlimit, upperlimit, bits
+            )
+        )
+
+    # Calculate the inverse of the two's complement
+    limit = 2 ** (bits - 1) - 1
+    if x <= limit:
+        return x
+    return x - 2 ** bits
+
+
+# ################ #
+# Bit manipulation #
+# ################ #
+
+
+def _set_bit_on(x, bit_num):
+    """Set bit 'bit_num' to True.
+
+    Args:
+        * x (int): The value before.
+        * bit_num (int): The bit number that should be set to True.
+
+    Returns:
+        The value after setting the bit. This is an integer.
+ + For example: + For x = 4 (dec) = 0100 (bin), setting bit number 0 results in 0101 (bin) = 5 (dec). + + """ + _check_int(x, minvalue=0, description="input value") + _check_int(bit_num, minvalue=0, description="bitnumber") + + return x | (1 << bit_num) + + +def _check_bit(x, bit_num): + """Check if bit 'bit_num' is set the input integer. + + Args: + * x (int): The input value. + * bit_num (int): The bit number to be checked + + Returns: + True or False + + For example: + For x = 4 (dec) = 0100 (bin), checking bit number 2 results in True, and + checking bit number 3 results in False. + + """ + _check_int(x, minvalue=0, description="input value") + _check_int(bit_num, minvalue=0, description="bitnumber") + + return (x & (1 << bit_num)) > 0 + + +# ######################## # +# Error checking functions # +# ######################## # + + +_CRC16TABLE = ( + 0, + 49345, + 49537, + 320, + 49921, + 960, + 640, + 49729, + 50689, + 1728, + 1920, + 51009, + 1280, + 50625, + 50305, + 1088, + 52225, + 3264, + 3456, + 52545, + 3840, + 53185, + 52865, + 3648, + 2560, + 51905, + 52097, + 2880, + 51457, + 2496, + 2176, + 51265, + 55297, + 6336, + 6528, + 55617, + 6912, + 56257, + 55937, + 6720, + 7680, + 57025, + 57217, + 8000, + 56577, + 7616, + 7296, + 56385, + 5120, + 54465, + 54657, + 5440, + 55041, + 6080, + 5760, + 54849, + 53761, + 4800, + 4992, + 54081, + 4352, + 53697, + 53377, + 4160, + 61441, + 12480, + 12672, + 61761, + 13056, + 62401, + 62081, + 12864, + 13824, + 63169, + 63361, + 14144, + 62721, + 13760, + 13440, + 62529, + 15360, + 64705, + 64897, + 15680, + 65281, + 16320, + 16000, + 65089, + 64001, + 15040, + 15232, + 64321, + 14592, + 63937, + 63617, + 14400, + 10240, + 59585, + 59777, + 10560, + 60161, + 11200, + 10880, + 59969, + 60929, + 11968, + 12160, + 61249, + 11520, + 60865, + 60545, + 11328, + 58369, + 9408, + 9600, + 58689, + 9984, + 59329, + 59009, + 9792, + 8704, + 58049, + 58241, + 9024, + 57601, + 8640, + 8320, + 57409, + 40961, + 24768, + 24960, 
+ 41281, + 25344, + 41921, + 41601, + 25152, + 26112, + 42689, + 42881, + 26432, + 42241, + 26048, + 25728, + 42049, + 27648, + 44225, + 44417, + 27968, + 44801, + 28608, + 28288, + 44609, + 43521, + 27328, + 27520, + 43841, + 26880, + 43457, + 43137, + 26688, + 30720, + 47297, + 47489, + 31040, + 47873, + 31680, + 31360, + 47681, + 48641, + 32448, + 32640, + 48961, + 32000, + 48577, + 48257, + 31808, + 46081, + 29888, + 30080, + 46401, + 30464, + 47041, + 46721, + 30272, + 29184, + 45761, + 45953, + 29504, + 45313, + 29120, + 28800, + 45121, + 20480, + 37057, + 37249, + 20800, + 37633, + 21440, + 21120, + 37441, + 38401, + 22208, + 22400, + 38721, + 21760, + 38337, + 38017, + 21568, + 39937, + 23744, + 23936, + 40257, + 24320, + 40897, + 40577, + 24128, + 23040, + 39617, + 39809, + 23360, + 39169, + 22976, + 22656, + 38977, + 34817, + 18624, + 18816, + 35137, + 19200, + 35777, + 35457, + 19008, + 19968, + 36545, + 36737, + 20288, + 36097, + 19904, + 19584, + 35905, + 17408, + 33985, + 34177, + 17728, + 34561, + 18368, + 18048, + 34369, + 33281, + 17088, + 17280, + 33601, + 16640, + 33217, + 32897, + 16448, +) +r"""CRC-16 lookup table with 256 elements. + +Built with this code:: + + poly=0xA001 + table = [] + for index in range(256): + data = index << 1 + crc = 0 + for _ in range(8, 0, -1): + data >>= 1 + if (data ^ crc) & 0x0001: + crc = (crc >> 1) ^ poly + else: + crc >>= 1 + table.append(crc) + output = '' + for i, m in enumerate(table): + if not i%11: + output += "\n" + output += "{:5.0f}, ".format(m) + print output +""" + + +def _calculate_crc_string(inputstring): + """Calculate CRC-16 for Modbus. + + Args: + inputstring (str): An arbitrary-length message (without the CRC). + + Returns: + A two-byte CRC string, where the least significant byte is first. 
+
+    """
+    _check_string(inputstring, description="input CRC string")
+
+    # Preload a 16-bit register with ones
+    register = 0xFFFF
+
+    for char in inputstring:
+        register = (register >> 8) ^ _CRC16TABLE[(register ^ ord(char)) & 0xFF]
+
+    return _num_to_twobyte_string(register, lsb_first=True)
+
+
+def _calculate_lrc_string(inputstring):
+    """Calculate LRC for Modbus.
+
+    Args:
+        inputstring (str): An arbitrary-length message (without the beginning
+        colon and terminating CRLF). It should already be decoded from hex-string.
+
+    Returns:
+        A one-byte LRC bytestring (not encoded to hex-string)
+
+    Algorithm from the document 'MODBUS over serial line specification and
+    implementation guide V1.02'.
+
+    The LRC is calculated as 8 bits (one byte).
+
+    For example an LRC 0110 0001 (bin) = 61 (hex) = 97 (dec) = 'a'. This function will
+    then return 'a'.
+
+    In Modbus ASCII mode, this should be transmitted using two characters. This
+    example should be transmitted '61', which is a string of length two. This function
+    does not handle that conversion for transmission.
+
+    """
+    _check_string(inputstring, description="input LRC string")
+
+    register = 0
+    for character in inputstring:
+        register += ord(character)
+
+    lrc = ((register ^ 0xFF) + 1) & 0xFF
+
+    return _num_to_onebyte_string(lrc)
+
+
+def _check_mode(mode):
+    """Check that the Modbus mode is valid.
+
+    Args:
+        mode (string): The Modbus mode (MODE_RTU or MODE_ASCII)
+
+    Raises:
+        TypeError, ValueError
+
+    """
+    if not isinstance(mode, str):
+        raise TypeError("The {0} should be a string. Given: {1!r}".format("mode", mode))
+
+    if mode not in [MODE_RTU, MODE_ASCII]:
+        raise ValueError(
+            "Unrecognized Modbus mode given. Must be 'rtu' or 'ascii' but {0!r} was given.".format(
+                mode
+            )
+        )
+
+
+def _check_functioncode(functioncode, list_of_allowed_values=None):
+    """Check that the given functioncode is in the list_of_allowed_values.
+
+    Also verifies that 1 <= function code <= 127.
+
+    Args:
+        * functioncode (int): The function code
+        * list_of_allowed_values (list of int): Allowed values. Use *None* to bypass
+          this part of the checking.
+
+    Raises:
+        TypeError, ValueError
+
+    """
+    FUNCTIONCODE_MIN = 1
+    FUNCTIONCODE_MAX = 127
+
+    _check_int(
+        functioncode, FUNCTIONCODE_MIN, FUNCTIONCODE_MAX, description="functioncode"
+    )
+
+    if list_of_allowed_values is None:
+        return
+
+    if not isinstance(list_of_allowed_values, list):
+        raise TypeError(
+            "The list_of_allowed_values should be a list. Given: {0!r}".format(
+                list_of_allowed_values
+            )
+        )
+
+    for value in list_of_allowed_values:
+        _check_int(
+            value,
+            FUNCTIONCODE_MIN,
+            FUNCTIONCODE_MAX,
+            description="functioncode inside list_of_allowed_values",
+        )
+
+    if functioncode not in list_of_allowed_values:
+        raise ValueError(
+            "Wrong function code: {0}, allowed values are {1!r}".format(
+                functioncode, list_of_allowed_values
+            )
+        )
+
+
+def _check_slaveaddress(slaveaddress):
+    """Check that the given slaveaddress is valid.
+
+    Args:
+        slaveaddress (int): The slave address
+
+    Raises:
+        TypeError, ValueError
+
+    """
+    SLAVEADDRESS_MAX = 255  # Allows usage also of reserved addresses
+    SLAVEADDRESS_MIN = 0
+
+    _check_int(
+        slaveaddress, SLAVEADDRESS_MIN, SLAVEADDRESS_MAX, description="slaveaddress"
+    )
+
+
+def _check_registeraddress(registeraddress):
+    """Check that the given registeraddress is valid.
+
+    Args:
+        registeraddress (int): The register address
+
+    Raises:
+        TypeError, ValueError
+
+    """
+    REGISTERADDRESS_MAX = 0xFFFF
+    REGISTERADDRESS_MIN = 0
+
+    _check_int(
+        registeraddress,
+        REGISTERADDRESS_MIN,
+        REGISTERADDRESS_MAX,
+        description="registeraddress",
+    )
+
+
+def _check_response_payload(
+    payload,
+    functioncode,
+    registeraddress,
+    value,
+    number_of_decimals,
+    number_of_registers,
+    number_of_bits,
+    signed,
+    byteorder,  # Not used. For keeping same signature as _parse_payload()
+    payloadformat,  # Not used. For keeping same signature as _parse_payload()
+):
+    if functioncode in [1, 2, 3, 4]:
+        _check_response_bytecount(payload)
+
+    if functioncode in [5, 6, 15, 16]:
+        _check_response_registeraddress(payload, registeraddress)
+
+    if functioncode == 5:
+        _check_response_writedata(payload, _bit_to_bytestring(value))
+    elif functioncode == 6:
+        _check_response_writedata(
+            payload, _num_to_twobyte_string(value, number_of_decimals, signed=signed)
+        )
+    elif functioncode == 15:
+        # The response contains the number of bits
+        _check_response_number_of_registers(payload, number_of_bits)
+
+    elif functioncode == 16:
+        _check_response_number_of_registers(payload, number_of_registers)
+
+    # Response for read bits
+    if functioncode in [1, 2]:
+        registerdata = payload[_NUMBER_OF_BYTES_BEFORE_REGISTERDATA:]
+        expected_number_of_bytes = _calculate_number_of_bytes_for_bits(number_of_bits)
+        if len(registerdata) != expected_number_of_bytes:
+            raise InvalidResponseError(
+                "The data length is wrong for payloadformat BIT/BITS."
+                + " Expected: {} Actual: {}.".format(
+                    expected_number_of_bytes, len(registerdata)
+                )
+            )
+
+    # Response for read registers
+    if functioncode in [3, 4]:
+        registerdata = payload[_NUMBER_OF_BYTES_BEFORE_REGISTERDATA:]
+        number_of_register_bytes = number_of_registers * _NUMBER_OF_BYTES_PER_REGISTER
+        if len(registerdata) != number_of_register_bytes:
+            raise InvalidResponseError(
+                "The register data length is wrong. "
+                + "Registerdata: {!r} bytes. Expected: {!r}.".format(
+                    len(registerdata), number_of_register_bytes
+                )
+            )
+
+
+def _check_response_slaveerrorcode(response):
+    """Check if the slave indicates an error.
+
+    Args:
+        * response (string): Response from the slave
+
+    The response is in RTU format, but the checksum might be one or two bytes
+    depending on whether it was sent in RTU or ASCII mode.
+
+    Checking of type and length of the response should be done before calling
+    this function.
+ + Raises: + SlaveReportedException or subclass + + """ + NON_ERRORS = [5] + SLAVE_ERRORS = { + 1: IllegalRequestError("Slave reported illegal function"), + 2: IllegalRequestError("Slave reported illegal data address"), + 3: IllegalRequestError("Slave reported illegal data value"), + 4: SlaveReportedException("Slave reported device failure"), + 6: SlaveDeviceBusyError("Slave reported device busy"), + 7: NegativeAcknowledgeError("Slave reported negative acknowledge"), + 8: SlaveReportedException("Slave reported memory parity error"), + 10: SlaveReportedException("Slave reported gateway path unavailable"), + 11: SlaveReportedException( + "Slave reported gateway target device failed to respond" + ), + } + + if len(response) < _BYTEPOSITION_FOR_SLAVE_ERROR_CODE + 1: + return # This check is also done before calling, do not raise exception here. + + received_functioncode = ord(response[_BYTEPOSITION_FOR_FUNCTIONCODE]) + + if _check_bit(received_functioncode, _BITNUMBER_FUNCTIONCODE_ERRORINDICATION): + slave_error_code = ord(response[_BYTEPOSITION_FOR_SLAVE_ERROR_CODE]) + + if slave_error_code in NON_ERRORS: + return + + error = SLAVE_ERRORS.get( + slave_error_code, + SlaveReportedException( + "Slave reported error code " + str(slave_error_code) + ), + ) + raise error + + +def _check_response_bytecount(payload): + """Check that the number of bytes as given in the response is correct. + + The first byte in the payload indicates the length of the payload (first + byte not counted). 
+
+    Args:
+        payload (string): The payload
+
+    Raises:
+        TypeError, ValueError, InvalidResponseError
+
+    """
+    POSITION_FOR_GIVEN_NUMBER = 0
+    NUMBER_OF_BYTES_TO_SKIP = 1
+
+    _check_string(
+        payload, minlength=1, description="payload", exception_type=InvalidResponseError
+    )
+
+    given_number_of_databytes = ord(payload[POSITION_FOR_GIVEN_NUMBER])
+    counted_number_of_databytes = len(payload) - NUMBER_OF_BYTES_TO_SKIP
+
+    if given_number_of_databytes != counted_number_of_databytes:
+        errortemplate = (
+            "Wrong given number of bytes in the response: "
+            + "{0}, but counted is {1} as data payload length is {2}."
+            + " The data payload is: {3!r}"
+        )
+        errortext = errortemplate.format(
+            given_number_of_databytes,
+            counted_number_of_databytes,
+            len(payload),
+            payload,
+        )
+        raise InvalidResponseError(errortext)
+
+
+def _check_response_registeraddress(payload, registeraddress):
+    """Check that the start address as given in the response is correct.
+
+    The first two bytes in the payload hold the address value.
+
+    Args:
+        * payload (string): The payload
+        * registeraddress (int): What the register address actually should be
+          (use decimal numbers, not hex).
+
+    Raises:
+        TypeError, ValueError, InvalidResponseError
+
+    """
+    _check_string(
+        payload, minlength=2, description="payload", exception_type=InvalidResponseError
+    )
+    _check_registeraddress(registeraddress)
+
+    BYTERANGE_FOR_STARTADDRESS = slice(0, 2)
+
+    bytes_for_startaddress = payload[BYTERANGE_FOR_STARTADDRESS]
+    received_startaddress = _twobyte_string_to_num(bytes_for_startaddress)
+
+    if received_startaddress != registeraddress:
+        raise InvalidResponseError(
+            "Wrong given write start address: "
+            + "{0}, but commanded is {1}. The data payload is: {2!r}".format(
+                received_startaddress, registeraddress, payload
+            )
+        )
+
+
+def _check_response_number_of_registers(payload, number_of_registers):
+    """Check that the number of written registers as given in the response is correct.
+
+    The bytes 2 and 3 (zero-based counting) in the payload hold the value.
+
+    Args:
+        * payload (string): The payload
+        * number_of_registers (int): Number of registers that have been written
+
+    Raises:
+        TypeError, ValueError, InvalidResponseError
+
+    """
+    _check_string(
+        payload, minlength=4, description="payload", exception_type=InvalidResponseError
+    )
+    _check_int(
+        number_of_registers,
+        minvalue=1,
+        maxvalue=max(
+            _MAX_NUMBER_OF_REGISTERS_TO_READ, _MAX_NUMBER_OF_REGISTERS_TO_WRITE
+        ),
+        description="number of registers",
+    )
+
+    BYTERANGE_FOR_NUMBER_OF_REGISTERS = slice(2, 4)
+
+    bytes_for_number_of_registers = payload[BYTERANGE_FOR_NUMBER_OF_REGISTERS]
+    received_number_of_written_registers = _twobyte_string_to_num(
+        bytes_for_number_of_registers
+    )
+
+    if received_number_of_written_registers != number_of_registers:
+        raise InvalidResponseError(
+            "Wrong number of registers to write in the response: "
+            + "{0}, but commanded is {1}. The data payload is: {2!r}".format(
+                received_number_of_written_registers, number_of_registers, payload
+            )
+        )
+
+
+def _check_response_writedata(payload, writedata):
+    """Check that the write data as given in the response is correct.
+
+    The bytes 2 and 3 (zero-based counting) in the payload hold the write data.
+
+    Args:
+        * payload (string): The payload
+        * writedata (string): The data that should have been written.
+          Length should be 2 bytes.
+
+    Raises:
+        TypeError, ValueError, InvalidResponseError
+
+    """
+    _check_string(
+        payload, minlength=4, description="payload", exception_type=InvalidResponseError
+    )
+    _check_string(writedata, minlength=2, maxlength=2, description="writedata")
+
+    BYTERANGE_FOR_WRITEDATA = slice(2, 4)
+
+    received_writedata = payload[BYTERANGE_FOR_WRITEDATA]
+
+    if received_writedata != writedata:
+        raise InvalidResponseError(
+            "Wrong write data in the response: "
+            + "{0!r}, but commanded is {1!r}. The data payload is: {2!r}".format(
+                received_writedata, writedata, payload
+            )
+        )
+
+
+def _check_string(
+    inputstring,
+    description,
+    minlength=0,
+    maxlength=None,
+    force_ascii=False,
+    exception_type=ValueError,
+):
+    """Check that the given string is valid.
+
+    Args:
+        * inputstring (string): The string to be checked
+        * description (string): Used in error messages for the checked inputstring
+        * minlength (int): Minimum length of the string
+        * maxlength (int or None): Maximum length of the string
+        * force_ascii (bool): Enforce that the string is ASCII
+        * exception_type (Exception): The type of exception to raise for length errors
+
+    The force_ascii argument is valid only for Python3, as all strings are ASCII in Python2.
+
+    Raises:
+        TypeError, ValueError or the one given by exception_type
+
+    Uses the function :func:`_check_int` internally.
+
+    """
+    # Type checking
+    if not isinstance(description, str):
+        raise TypeError(
+            "The description should be a string. Given: {0!r}".format(description)
+        )
+
+    if not isinstance(inputstring, str):
+        raise TypeError(
+            "The {0} should be a string. Given: {1!r}".format(description, inputstring)
+        )
+
+    if not isinstance(maxlength, (int, type(None))):
+        raise TypeError(
+            "The maxlength must be an integer or None. Given: {0!r}".format(maxlength)
+        )
+    try:
+        issubclass(exception_type, Exception)
+    except TypeError:
+        raise TypeError(
+            "The exception_type must be an exception class. "
+            + "It is not even a class. Given: {0!r}".format(type(exception_type))
+        )
+    if not issubclass(exception_type, Exception):
+        raise TypeError(
+            "The exception_type must be an exception class. Given: {0!r}".format(
+                type(exception_type)
+            )
+        )
+
+    # Check values
+    _check_int(minlength, minvalue=0, maxvalue=None, description="minlength")
+
+    if len(inputstring) < minlength:
+        raise exception_type(
+            "The {0} is too short: {1}, but minimum value is {2}. Given: {3!r}".format(
+                description, len(inputstring), minlength, inputstring
+            )
+        )
+
+    if maxlength is not None:
+        if maxlength < 0:
+            raise ValueError(
+                "The maxlength must be positive. Given: {0}".format(maxlength)
+            )
+
+        if maxlength < minlength:
+            raise ValueError(
+                "The maxlength must not be smaller than minlength. Given: {0} and {1}".format(
+                    maxlength, minlength
+                )
+            )
+
+        if len(inputstring) > maxlength:
+            raise exception_type(
+                "The {0} is too long: {1}, but maximum value is {2}. Given: {3!r}".format(
+                    description, len(inputstring), maxlength, inputstring
+                )
+            )
+
+    if force_ascii and sys.version > "3":
+        try:
+            inputstring.encode("ascii")
+        except UnicodeEncodeError:
+            raise ValueError(
+                "The {0} must be ASCII. Given: {1!r}".format(description, inputstring)
+            )
+
+
+def _check_int(inputvalue, minvalue=None, maxvalue=None, description="inputvalue"):
+    """Check that the given integer is valid.
+
+    Args:
+        * inputvalue (int or long): The integer to be checked
+        * minvalue (int or long, or None): Minimum value of the integer
+        * maxvalue (int or long, or None): Maximum value of the integer
+        * description (string): Used in error messages for the checked inputvalue
+
+    Raises:
+        TypeError, ValueError
+
+    Note: Can not use the function :func:`_check_string`, as that function uses this
+    function internally.
+
+    """
+    if not isinstance(description, str):
+        raise TypeError(
+            "The description should be a string. Given: {0!r}".format(description)
+        )
+
+    if not isinstance(inputvalue, (int, long)):
+        raise TypeError(
+            "The {0} must be an integer. Given: {1!r}".format(description, inputvalue)
+        )
+
+    if not isinstance(minvalue, (int, long, type(None))):
+        raise TypeError(
+            "The minvalue must be an integer or None. Given: {0!r}".format(minvalue)
+        )
+
+    if not isinstance(maxvalue, (int, long, type(None))):
+        raise TypeError(
+            "The maxvalue must be an integer or None. Given: {0!r}".format(maxvalue)
+        )
+
+    _check_numerical(inputvalue, minvalue, maxvalue, description)
+
+
+def _check_numerical(
+    inputvalue, minvalue=None, maxvalue=None, description="inputvalue"
+):
+    """Check that the given numerical value is valid.
+
+    Args:
+        * inputvalue (numerical): The value to be checked.
+        * minvalue (numerical): Minimum value. Use None to skip this part of the test.
+        * maxvalue (numerical): Maximum value. Use None to skip this part of the test.
+        * description (string): Used in error messages for the checked inputvalue
+
+    Raises:
+        TypeError, ValueError
+
+    Note: Can not use the function :func:`_check_string`, as it uses this function
+    internally.
+
+    """
+    # Type checking
+    if not isinstance(description, str):
+        raise TypeError(
+            "The description should be a string. Given: {0!r}".format(description)
+        )
+
+    if not isinstance(inputvalue, (int, long, float)):
+        raise TypeError(
+            "The {0} must be numerical. Given: {1!r}".format(description, inputvalue)
+        )
+
+    if not isinstance(minvalue, (int, float, long, type(None))):
+        raise TypeError(
+            "The minvalue must be numeric or None. Given: {0!r}".format(minvalue)
+        )
+
+    if not isinstance(maxvalue, (int, float, long, type(None))):
+        raise TypeError(
+            "The maxvalue must be numeric or None. Given: {0!r}".format(maxvalue)
+        )
+
+    # Consistency checking
+    if (minvalue is not None) and (maxvalue is not None):
+        if maxvalue < minvalue:
+            raise ValueError(
+                "The maxvalue must not be smaller than minvalue. "
+                + "Given: {0} and {1}, respectively.".format(maxvalue, minvalue)
+            )
+
+    # Value checking
+    if minvalue is not None:
+        if inputvalue < minvalue:
+            raise ValueError(
+                "The {0} is too small: {1}, but minimum value is {2}.".format(
+                    description, inputvalue, minvalue
+                )
+            )
+
+    if maxvalue is not None:
+        if inputvalue > maxvalue:
+            raise ValueError(
+                "The {0} is too large: {1}, but maximum value is {2}.".format(
+                    description, inputvalue, maxvalue
+                )
+            )
+
+
+def _check_bool(inputvalue, description="inputvalue"):
+    """Check that the given inputvalue is a boolean.
+
+    Args:
+        * inputvalue (boolean): The value to be checked.
+        * description (string): Used in error messages for the checked inputvalue.
+
+    Raises:
+        TypeError, ValueError
+
+    """
+    _check_string(description, minlength=1, description="description string")
+    if not isinstance(inputvalue, bool):
+        raise TypeError(
+            "The {0} must be boolean. Given: {1!r}".format(description, inputvalue)
+        )
+
+
+#####################
+# Development tools #
+#####################
+
+
+def _print_out(inputstring):
+    """Print the inputstring, in a way that is compatible with both Python2 and Python3.
+
+    Args:
+        inputstring (str): The string that should be printed.
+
+    Raises:
+        TypeError
+
+    """
+    _check_string(inputstring, description="string to print")
+
+    sys.stdout.write(inputstring + "\n")
+    sys.stdout.flush()
+
+
+# def _interpretRawMessage(inputstr):
+#     r"""Generate a human readable description of a Modbus bytestring.
+
+#     Args:
+#         inputstr (str): The bytestring that should be interpreted.
+
+#     Returns:
+#         A descriptive string.
+
+#     For example, the string ``'\n\x03\x10\x01\x00\x01\xd0q'`` should give something like::
+
+#         TODO: update
+
+#         Modbus bytestring decoder
+#         Input string (length 8 characters): '\n\x03\x10\x01\x00\x01\xd0q'
+#         Probably modbus RTU mode.
+#         Slave address: 10 (dec). Function code: 3 (dec).
+#         Valid message. 
Extracted payload: '\x10\x01\x00\x01' + +# Pos Character Hex Dec Probable interpretation +# ------------------------------------------------- +# 0: '\n' 0A 10 Slave address +# 1: '\x03' 03 3 Function code +# 2: '\x10' 10 16 Payload +# 3: '\x01' 01 1 Payload +# 4: '\x00' 00 0 Payload +# 5: '\x01' 01 1 Payload +# 6: '\xd0' D0 208 Checksum, CRC LSB +# 7: 'q' 71 113 Checksum, CRC MSB + +# """ +# raise NotImplementedError() +# output = "" +# output += "Modbus bytestring decoder\n" +# output += "Input string (length {} characters): {!r} \n".format( +# len(inputstr), inputstr +# ) + +# # Detect modbus type +# if inputstr.startswith(_ASCII_HEADER) and inputstr.endswith(_ASCII_FOOTER): +# mode = MODE_ASCII +# else: +# mode = MODE_RTU +# output += "Probably Modbus {} mode.\n".format(mode.upper()) + +# # Extract slave address and function code +# try: +# if mode == MODE_ASCII: +# slaveaddress = int(inputstr[1:3]) +# functioncode = int(inputstr[3:5]) +# else: +# slaveaddress = ord(inputstr[0]) +# functioncode = ord(inputstr[1]) +# output += "Slave address: {} (dec). Function code: {} (dec).\n".format( +# slaveaddress, functioncode +# ) +# except Exception: +# output += "\nCould not extract slave address and function code. \n\n" + +# # Check message validity +# try: +# extractedpayload = _extract_payload(inputstr, slaveaddress, mode, functioncode) +# output += "Valid message. Extracted payload: {!r}\n".format(extractedpayload) +# except (ValueError, TypeError) as err: +# output += "\nThe message does not seem to be valid Modbus {}. ".format(mode.upper()) +# output += "Error message: \n{}. \n\n".format(err.messages) +# except NameError as err: +# output += ( +# "\nNo message validity checking. 
\n\n" +# ) # Slave address or function code not available + +# # Generate table describing the message +# if mode == MODE_RTU: +# output += "\nPos Character Hex Dec Probable interpretation \n" +# output += "------------------------------------------------- \n" +# for i, character in enumerate(inputstr): +# if i == 0: +# description = "Slave address" +# elif i == 1: +# description = "Function code" +# elif i == len(inputstr) - 2: +# description = "Checksum, CRC LSB" +# elif i == len(inputstr) - 1: +# description = "Checksum, CRC MSB" +# else: +# description = "Payload" +# output += "{0:3.0f}: {1!r:<8} {2:02X} {2: 4.0f} {3:<10} \n".format( +# i, character, ord(character), description +# ) + +# elif mode == MODE_ASCII: +# output += "\nPos Character(s) Converted Hex Dec Probable interpretation \n" +# output += "--------------------------------------------------------------- \n" + +# i = 0 +# while i < len(inputstr): + +# if inputstr[i] in [":", "\r", "\n"]: +# if inputstr[i] == ":": +# description = "Start character" +# else: +# description = "Stop character" + +# output += "{0:3.0f}: {1!r:<8} {2} \n".format( +# i, inputstr[i], description +# ) +# i += 1 + +# else: +# if i == 1: +# description = "Slave address" +# elif i == 3: +# description = "Function code" +# elif i == len(inputstr) - 4: +# description = "Checksum (LRC)" +# else: +# description = "Payload" + +# try: +# hexvalue = _hexdecode(inputstr[i:(i + 2)]) +# output += "{0:3.0f}: {1!r:<8} {2!r} {3:02X} {3: 4.0f} {4} \n". +# format( +# i, inputstr[i:(i + 2)], hexvalue, ord(hexvalue), description +# ) +# except Exception: +# output += "{0:3.0f}: {1!r:<8} ? ? ? {2} \n".format( +# i, inputstr[i:(i + 2)], description +# ) +# i += 2 + +# # Generate description for the payload +# output += "\n\n" +# try: +# output += _interpretPayload(functioncode, extractedpayload) +# except Exception: +# output += ( +# "\nCould not interpret the payload. 
\n\n" +# ) # Payload or function code not available + +# return output + + +# def _interpretPayload(functioncode, payload): +# r"""Generate a human readable description of a Modbus payload. + +# Args: +# * functioncode (int): Function code +# * payload (str): The payload that should be interpreted. It should be a +# byte string. + +# Returns: +# A descriptive string. + +# For example, the payload ``'\x10\x01\x00\x01'`` for functioncode 3 should give +# something like:: + +# T ODO: Update + +# """ +# raise NotImplementedError() +# output = "" +# output += "Modbus payload decoder\n" +# output += "Input payload (length {} characters): {!r} \n".format( +# len(payload), payload +# ) +# output += "Function code: {} (dec).\n".format(functioncode) + +# if len(payload) == 4: +# FourbyteMessageFirstHalfValue = _twobyte_string_to_num(payload[0:2]) +# FourbyteMessageSecondHalfValue = _twobyte_string_to_num(payload[2:4]) + +# return output + + +def _get_diagnostic_string(): + """Generate a diagnostic string, showing the module version, the platform etc. + + Returns: + A descriptive string. 
+ + """ + text = "\n## Diagnostic output from minimalmodbus ## \n\n" + text += "Minimalmodbus version: " + __version__ + "\n" + text += "Minimalmodbus status: " + __status__ + "\n" + text += "File name (with relative path): " + __file__ + "\n" + text += "Full file path: " + os.path.abspath(__file__) + "\n\n" + text += "pySerial version: " + serial.VERSION + "\n" + text += "pySerial full file path: " + os.path.abspath(serial.__file__) + "\n\n" + text += "Platform: " + sys.platform + "\n" + text += "Filesystem encoding: " + repr(sys.getfilesystemencoding()) + "\n" + text += "Byteorder: " + sys.byteorder + "\n" + text += "Python version: " + sys.version + "\n" + text += "Python version info: " + repr(sys.version_info) + "\n" + text += "Python flags: " + repr(sys.flags) + "\n" + text += "Python argv: " + repr(sys.argv) + "\n" + text += "Python prefix: " + repr(sys.prefix) + "\n" + text += "Python exec prefix: " + repr(sys.exec_prefix) + "\n" + text += "Python executable: " + repr(sys.executable) + "\n" + try: + text += "Long info: " + repr(sys.long_info) + "\n" + except Exception: + text += "Long info: (none)\n" # For Python3 compatibility + try: + text += "Float repr style: " + repr(sys.float_repr_style) + "\n\n" + except Exception: + text += "Float repr style: (none) \n\n" # For Python 2.6 compatibility + text += "Variable __name__: " + __name__ + "\n" + text += "Current directory: " + os.getcwd() + "\n\n" + text += "Python path: \n" + text += "\n".join(sys.path) + "\n" + text += "\n## End of diagnostic output ## \n" + return text + + +# For backward compatibility +_getDiagnosticString = _get_diagnostic_string diff --git a/minimalmodbus.pyc b/minimalmodbus.pyc new file mode 100644 index 0000000..0edfbb4 Binary files /dev/null and b/minimalmodbus.pyc differ diff --git a/pycomm/__init__.py b/pycomm/__init__.py new file mode 100644 index 0000000..8c1f233 --- /dev/null +++ b/pycomm/__init__.py @@ -0,0 +1 @@ +__author__ = 'agostino' diff --git a/pycomm/__init__.pyc 
b/pycomm/__init__.pyc new file mode 100644 index 0000000..330a6c3 Binary files /dev/null and b/pycomm/__init__.pyc differ diff --git a/pycomm/ab_comm/__init__.py b/pycomm/ab_comm/__init__.py new file mode 100644 index 0000000..28c38a3 --- /dev/null +++ b/pycomm/ab_comm/__init__.py @@ -0,0 +1,2 @@ +__author__ = 'agostino' +import logging diff --git a/pycomm/ab_comm/__init__.pyc b/pycomm/ab_comm/__init__.pyc new file mode 100644 index 0000000..1a3f8e5 Binary files /dev/null and b/pycomm/ab_comm/__init__.pyc differ diff --git a/pycomm/ab_comm/clx.py b/pycomm/ab_comm/clx.py new file mode 100644 index 0000000..2f9f02c --- /dev/null +++ b/pycomm/ab_comm/clx.py @@ -0,0 +1,912 @@ +# -*- coding: utf-8 -*- +# +# clx.py - Ethernet/IP Client for Rockwell PLCs +# +# +# Copyright (c) 2014 Agostino Ruscito +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
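The `Driver._parse_instance_attribute_list` method in this file walks a symbol-list reply laid out as a packed run of (instance id, tag-name length, tag name, symbol type) records. A minimal standalone sketch of that record layout — Python 3, standard library only, with fabricated sample bytes rather than a capture from a real PLC, and independent of the driver's own unpack helpers — is:

```python
import struct

def parse_symbol_records(payload):
    """Walk packed records: DINT instance id, UINT name length, name bytes, UINT symbol type."""
    tags = []
    idx = 0
    while idx < len(payload):
        instance = struct.unpack_from('<i', payload, idx)[0]        # 4-byte instance id
        name_len = struct.unpack_from('<H', payload, idx + 4)[0]    # 2-byte tag-name length
        name = payload[idx + 6:idx + 6 + name_len].decode('ascii')  # tag name
        symbol_type = struct.unpack_from('<H', payload, idx + 6 + name_len)[0]  # 2-byte symbol type
        tags.append({'instance_id': instance, 'tag_name': name, 'symbol_type': symbol_type})
        idx += 8 + name_len  # record size = 4 + 2 + name_len + 2
    return tags

# Fabricated reply fragment: one tag named "Counts" with symbol type 0x00C4 (DINT)
sample = struct.pack('<iH', 1, 6) + b'Counts' + struct.pack('<H', 0x00C4)
print(parse_symbol_records(sample))
# → [{'instance_id': 1, 'tag_name': 'Counts', 'symbol_type': 196}]
```

The 0x00C4 type code and the sample values are assumptions for illustration only; the driver itself resolves symbol types through its `I_DATA_TYPE` table.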
+# +from pycomm.cip.cip_base import * +import logging +try: # Python 2.7+ + from logging import NullHandler +except ImportError: + class NullHandler(logging.Handler): + def emit(self, record): + pass + +logger = logging.getLogger(__name__) +logger.addHandler(NullHandler()) + +string_sizes = [82, 12, 16, 20, 40, 8] + + +class Driver(Base): + """ + This Ethernet/IP client is based on Rockwell specification. Please refer to the link below for details. + + http://literature.rockwellautomation.com/idc/groups/literature/documents/pm/1756-pm020_-en-p.pdf + + The following services have been implemented: + - Read Tag Service (0x4c) + - Read Tag Fragment Service (0x52) + - Write Tag Service (0x4d) + - Write Tag Fragment Service (0x53) + - Multiple Service Packet (0x0a) + + The client has been successfully tested with the following PLCs: + - CompactLogix 5330ERM + - CompactLogix 5370 + - ControlLogix 5572 and 1756-EN2T Module + +""" + + def __init__(self): + super(Driver, self).__init__() + + self._buffer = {} + self._get_template_in_progress = False + self.__version__ = '0.2' + + def get_last_tag_read(self): + """ Return the last tag read by a multi request read + + :return: A tuple (tag name, value, type) + """ + return self._last_tag_read + + def get_last_tag_write(self): + """ Return the last tag write by a multi request write + + :return: A tuple (tag name, 'GOOD') if the write was successful otherwise (tag name, 'BAD') + """ + return self._last_tag_write + + def _parse_instance_attribute_list(self, start_tag_ptr, status): + """ extract the tags list from the message received + + :param start_tag_ptr: The point in the message string where the tag list begin + :param status: The status of the message receives + """ + tags_returned = self._reply[start_tag_ptr:] + tags_returned_length = len(tags_returned) + idx = 0 + instance = 0 + count = 0 + try: + while idx < tags_returned_length: + instance = unpack_dint(tags_returned[idx:idx+4]) + idx += 4 + tag_length = 
unpack_uint(tags_returned[idx:idx+2]) + idx += 2 + tag_name = tags_returned[idx:idx+tag_length] + idx += tag_length + symbol_type = unpack_uint(tags_returned[idx:idx+2]) + idx += 2 + count += 1 + self._tag_list.append({'instance_id': instance, + 'tag_name': tag_name, + 'symbol_type': symbol_type}) + except Exception as e: + raise DataError(e) + + if status == SUCCESS: + self._last_instance = -1 + elif status == 0x06: + self._last_instance = instance + 1 + else: + self._status = (1, 'unknown status during _parse_tag_list') + self._last_instance = -1 + + def _parse_structure_makeup_attributes(self, start_tag_ptr, status): + """ extract the tags list from the message received + + :param start_tag_ptr: The point in the message string where the tag list begin + :param status: The status of the message receives + """ + self._buffer = {} + + if status != SUCCESS: + self._buffer['Error'] = status + return + + attribute = self._reply[start_tag_ptr:] + idx = 4 + try: + if unpack_uint(attribute[idx:idx + 2]) == SUCCESS: + idx += 2 + self._buffer['object_definition_size'] = unpack_dint(attribute[idx:idx + 4]) + else: + self._buffer['Error'] = 'object_definition Error' + return + + idx += 6 + if unpack_uint(attribute[idx:idx + 2]) == SUCCESS: + idx += 2 + self._buffer['structure_size'] = unpack_dint(attribute[idx:idx + 4]) + else: + self._buffer['Error'] = 'structure Error' + return + + idx += 6 + if unpack_uint(attribute[idx:idx + 2]) == SUCCESS: + idx += 2 + self._buffer['member_count'] = unpack_uint(attribute[idx:idx + 2]) + else: + self._buffer['Error'] = 'member_count Error' + return + + idx += 4 + if unpack_uint(attribute[idx:idx + 2]) == SUCCESS: + idx += 2 + self._buffer['structure_handle'] = unpack_uint(attribute[idx:idx + 2]) + else: + self._buffer['Error'] = 'structure_handle Error' + return + + return self._buffer + + except Exception as e: + raise DataError(e) + + def _parse_template(self, start_tag_ptr, status): + """ extract the tags list from the message 
received + + :param start_tag_ptr: The point in the message string where the tag list begins + :param status: The status of the message received + """ + tags_returned = self._reply[start_tag_ptr:] + bytes_received = len(tags_returned) + + self._buffer += tags_returned + + if status == SUCCESS: + self._get_template_in_progress = False + + elif status == 0x06: + self._byte_offset += bytes_received + else: + self._status = (1, 'unknown status {0} during _parse_template'.format(status)) + logger.warning(self._status) + self._last_instance = -1 + + def _parse_fragment(self, start_ptr, status): + """ parse the fragment returned by a fragment service. + + :param start_ptr: Where the fragment starts within the reply + :param status: status field used to decide whether to keep parsing or stop + """ + + try: + data_type = unpack_uint(self._reply[start_ptr:start_ptr+2]) + fragment_returned = self._reply[start_ptr+2:] + except Exception as e: + raise DataError(e) + + fragment_returned_length = len(fragment_returned) + idx = 0 + + while idx < fragment_returned_length: + try: + typ = I_DATA_TYPE[data_type] + if self._output_raw: + value = fragment_returned[idx:idx+DATA_FUNCTION_SIZE[typ]] + else: + value = UNPACK_DATA_FUNCTION[typ](fragment_returned[idx:idx+DATA_FUNCTION_SIZE[typ]]) + idx += DATA_FUNCTION_SIZE[typ] + except Exception as e: + raise DataError(e) + if self._output_raw: + self._tag_list += value + else: + self._tag_list.append((self._last_position, value)) + self._last_position += 1 + + if status == SUCCESS: + self._byte_offset = -1 + elif status == 0x06: + self._byte_offset += fragment_returned_length + else: + self._status = (2, '{0}: {1}'.format(SERVICE_STATUS[status], get_extended_status(self._reply, 48))) + logger.warning(self._status) + self._byte_offset = -1 + + def _parse_multiple_request_read(self, tags): + """ parse the message received from a multi request read: + + For each tag parsed, the information extracted includes the tag name, the value read and the data 
type. + That information is appended to the tag list as a tuple + + :return: the tag list + """ + offset = 50 + position = 50 + try: + number_of_service_replies = unpack_uint(self._reply[offset:offset+2]) + tag_list = [] + for index in range(number_of_service_replies): + position += 2 + start = offset + unpack_uint(self._reply[position:position+2]) + general_status = unpack_usint(self._reply[start+2:start+3]) + + if general_status == 0: + data_type = unpack_uint(self._reply[start+4:start+6]) + value_begin = start + 6 + value_end = value_begin + DATA_FUNCTION_SIZE[I_DATA_TYPE[data_type]] + value = self._reply[value_begin:value_end] + self._last_tag_read = (tags[index], UNPACK_DATA_FUNCTION[I_DATA_TYPE[data_type]](value), + I_DATA_TYPE[data_type]) + else: + self._last_tag_read = (tags[index], None, None) + + tag_list.append(self._last_tag_read) + + return tag_list + except Exception as e: + raise DataError(e) + + def _parse_multiple_request_write(self, tags): + """ parse the message received from a multi request write: + + For each tag parsed, the information extracted includes the tag name and the status of the write. 
+ That information is appended to the tag list as a tuple + + :return: the tag list + """ + offset = 50 + position = 50 + try: + number_of_service_replies = unpack_uint(self._reply[offset:offset+2]) + tag_list = [] + for index in range(number_of_service_replies): + position += 2 + start = offset + unpack_uint(self._reply[position:position+2]) + general_status = unpack_usint(self._reply[start+2:start+3]) + + if general_status == 0: + self._last_tag_write = (tags[index] + ('GOOD',)) + else: + self._last_tag_write = (tags[index] + ('BAD',)) + + tag_list.append(self._last_tag_write) + return tag_list + except Exception as e: + raise DataError(e) + + def _check_reply(self): + """ check the reply message for errors + + """ + self._more_packets_available = False + try: + if self._reply is None: + self._status = (3, '%s without reply' % REPLAY_INFO[unpack_dint(self._message[:2])]) + return False + # Get the type of command + typ = unpack_uint(self._reply[:2]) + + # Encapsulation status check + if unpack_dint(self._reply[8:12]) != SUCCESS: + self._status = (3, "{0} reply status:{1}".format(REPLAY_INFO[typ], + SERVICE_STATUS[unpack_dint(self._reply[8:12])])) + return False + + # Command Specific Status check + if typ == unpack_uint(ENCAPSULATION_COMMAND["send_rr_data"]): + status = unpack_usint(self._reply[42:43]) + if status != SUCCESS: + self._status = (3, "send_rr_data reply:{0} - Extend status:{1}".format( + SERVICE_STATUS[status], get_extended_status(self._reply, 42))) + return False + else: + return True + elif typ == unpack_uint(ENCAPSULATION_COMMAND["send_unit_data"]): + status = unpack_usint(self._reply[48:49]) + if unpack_usint(self._reply[46:47]) == I_TAG_SERVICES_REPLY["Read Tag Fragmented"]: + self._parse_fragment(50, status) + return True + if unpack_usint(self._reply[46:47]) == I_TAG_SERVICES_REPLY["Get Instance Attributes List"]: + self._parse_instance_attribute_list(50, status) + return True + if unpack_usint(self._reply[46:47]) == 
I_TAG_SERVICES_REPLY["Get Attributes"]: + self._parse_structure_makeup_attributes(50, status) + return True + if unpack_usint(self._reply[46:47]) == I_TAG_SERVICES_REPLY["Read Template"] and \ + self._get_template_in_progress: + self._parse_template(50, status) + return True + if status == 0x06: + self._status = (3, "Insufficient Packet Space") + self._more_packets_available = True + elif status != SUCCESS: + self._status = (3, "send_unit_data reply:{0} - Extend status:{1}".format( + SERVICE_STATUS[status], get_extended_status(self._reply, 48))) + logger.warning(self._status) + return False + else: + return True + + return True + except Exception as e: + raise DataError(e) + + def read_tag(self, tag): + """ read tag from a connected plc + + Possible combination can be passed to this method: + - ('Counts') a single tag name + - (['ControlWord']) a list with one tag or many + - (['parts', 'ControlWord', 'Counts']) + + At the moment there is not a strong validation for the argument passed. The user should verify + the correctness of the format passed. + + :return: None is returned in case of error otherwise the tag list is returned + """ + self.clear() + multi_requests = False + if isinstance(tag, list): + multi_requests = True + + if not self._target_is_connected: + if not self.forward_open(): + self._status = (6, "Target did not connected. read_tag will not be executed.") + logger.warning(self._status) + raise DataError("Target did not connected. read_tag will not be executed.") + + if multi_requests: + rp_list = [] + for t in tag: + rp = create_tag_rp(t, multi_requests=True) + if rp is None: + self._status = (6, "Cannot create tag {0} request packet. read_tag will not be executed.".format(tag)) + raise DataError("Cannot create tag {0} request packet. 
read_tag will not be executed.".format(tag)) + else: + rp_list.append(chr(TAG_SERVICES_REQUEST['Read Tag']) + rp + pack_uint(1)) + message_request = build_multiple_service(rp_list, Base._get_sequence()) + + else: + rp = create_tag_rp(tag) + if rp is None: + self._status = (6, "Cannot create tag {0} request packet. read_tag will not be executed.".format(tag)) + return None + else: + # Creating the Message Request Packet + message_request = [ + pack_uint(Base._get_sequence()), + chr(TAG_SERVICES_REQUEST['Read Tag']), # the Request Service + chr(len(rp) / 2), # the Request Path Size length in word + rp, # the request path + pack_uint(1) + ] + + if self.send_unit_data( + build_common_packet_format( + DATA_ITEM['Connected'], + ''.join(message_request), + ADDRESS_ITEM['Connection Based'], + addr_data=self._target_cid, + )) is None: + raise DataError("send_unit_data returned not valid data") + + if multi_requests: + return self._parse_multiple_request_read(tag) + else: + # Get the data type + if self._status[0] == SUCCESS: + data_type = unpack_uint(self._reply[50:52]) + try: + return UNPACK_DATA_FUNCTION[I_DATA_TYPE[data_type]](self._reply[52:]), I_DATA_TYPE[data_type] + except Exception as e: + raise DataError(e) + else: + return None + + def read_array(self, tag, counts, raw=False): + """ read array of atomic data type from a connected plc + + At the moment there is not a strong validation for the argument passed. The user should verify + the correctness of the format passed. + + :param tag: the name of the tag to read + :param counts: the number of element to read + :param raw: the value should output as raw-value (hex) + :return: None is returned in case of error otherwise the tag list is returned + """ + self.clear() + if not self._target_is_connected: + if not self.forward_open(): + self._status = (7, "Target did not connected. read_tag will not be executed.") + logger.warning(self._status) + raise DataError("Target did not connected. 
read_tag will not be executed.") + + self._byte_offset = 0 + self._last_position = 0 + self._output_raw = raw + + if self._output_raw: + self._tag_list = '' + else: + self._tag_list = [] + while self._byte_offset != -1: + rp = create_tag_rp(tag) + if rp is None: + self._status = (7, "Cannot create tag {0} request packet. read_tag will not be executed.".format(tag)) + return None + else: + # Creating the Message Request Packet + message_request = [ + pack_uint(Base._get_sequence()), + chr(TAG_SERVICES_REQUEST["Read Tag Fragmented"]), # the Request Service + chr(len(rp) / 2), # the Request Path Size length in word + rp, # the request path + pack_uint(counts), + pack_dint(self._byte_offset) + ] + + if self.send_unit_data( + build_common_packet_format( + DATA_ITEM['Connected'], + ''.join(message_request), + ADDRESS_ITEM['Connection Based'], + addr_data=self._target_cid, + )) is None: + raise DataError("send_unit_data returned not valid data") + + return self._tag_list + + def write_tag(self, tag, value=None, typ=None): + """ write tag/tags from a connected plc + + Possible combination can be passed to this method: + - ('tag name', Value, data type) as single parameters or inside a tuple + - ([('tag name', Value, data type), ('tag name2', Value, data type)]) as array of tuples + + At the moment there is not a strong validation for the argument passed. The user should verify + the correctness of the format passed. 
+ + The types accepted are: + - BOOL + - SINT + - INT + - DINT + - REAL + - LINT + - BYTE + - WORD + - DWORD + - LWORD + + :param tag: tag name, or an array of tuples containing (tag name, value, data type) + :param value: the value to write, or None if tag is an array of tuples or a tuple + :param typ: the type of the tag to write, or None if tag is an array of tuples or a tuple + :return: None is returned in case of error, otherwise the tag list is returned + """ + self.clear() # cleanup error string + multi_requests = False + if isinstance(tag, list): + multi_requests = True + + if not self._target_is_connected: + if not self.forward_open(): + self._status = (8, "Target is not connected. write_tag will not be executed.") + logger.warning(self._status) + raise DataError("Target is not connected. write_tag will not be executed.") + + if multi_requests: + rp_list = [] + tag_to_remove = [] + idx = 0 + for name, value, typ in tag: + # Create the request path to wrap the tag name + rp = create_tag_rp(name, multi_requests=True) + if rp is None: + self._status = (8, "Cannot create tag {0} req. packet. write_tag will not be executed".format(tag)) + return None + else: + try: # Trying to add the rp to the request path list + val = PACK_DATA_FUNCTION[typ](value) + rp_list.append( + chr(TAG_SERVICES_REQUEST['Write Tag']) + + rp + + pack_uint(S_DATA_TYPE[typ]) + + pack_uint(1) + + val + ) + idx += 1 + except (LookupError, struct.error) as e: + self._status = (8, "Tag:{0} type:{1} removed from write list. 
Error:{2}.".format(name, typ, e)) + + # The tag in idx position need to be removed from the rp list because has some kind of error + tag_to_remove.append(idx) + + # Remove the tags that have not been inserted in the request path list + for position in tag_to_remove: + del tag[position] + # Create the message request + message_request = build_multiple_service(rp_list, Base._get_sequence()) + + else: + if isinstance(tag, tuple): + name, value, typ = tag + else: + name = tag + + rp = create_tag_rp(name) + if rp is None: + self._status = (8, "Cannot create tag {0} request packet. write_tag will not be executed.".format(tag)) + logger.warning(self._status) + return None + else: + # Creating the Message Request Packet + message_request = [ + pack_uint(Base._get_sequence()), + chr(TAG_SERVICES_REQUEST["Write Tag"]), # the Request Service + chr(len(rp) / 2), # the Request Path Size length in word + rp, # the request path + pack_uint(S_DATA_TYPE[typ]), # data type + pack_uint(1), # Add the number of tag to write + PACK_DATA_FUNCTION[typ](value) + ] + + ret_val = self.send_unit_data( + build_common_packet_format( + DATA_ITEM['Connected'], + ''.join(message_request), + ADDRESS_ITEM['Connection Based'], + addr_data=self._target_cid, + ) + ) + + if multi_requests: + return self._parse_multiple_request_write(tag) + else: + if ret_val is None: + raise DataError("send_unit_data returned not valid data") + return ret_val + + def write_array(self, tag, values, data_type, raw=False): + """ write array of atomic data type from a connected plc + At the moment there is not a strong validation for the argument passed. The user should verify + the correctness of the format passed. 
+ :param tag: the name of the tag to read + :param data_type: the type of tag to write + :param values: the array of values to write, if raw: the frame with bytes + :param raw: indicates that the values are given as raw values (hex) + """ + self.clear() + if not isinstance(values, list): + self._status = (9, "A list of tags must be passed to write_array.") + logger.warning(self._status) + raise DataError("A list of tags must be passed to write_array.") + + if not self._target_is_connected: + if not self.forward_open(): + self._status = (9, "Target did not connected. write_array will not be executed.") + logger.warning(self._status) + raise DataError("Target did not connected. write_array will not be executed.") + + array_of_values = "" + byte_size = 0 + byte_offset = 0 + + for i, value in enumerate(values): + if raw: + array_of_values += value + else: + array_of_values += PACK_DATA_FUNCTION[data_type](value) + byte_size += DATA_FUNCTION_SIZE[data_type] + + if byte_size >= 450 or i == len(values)-1: + # create the message and send the fragment + rp = create_tag_rp(tag) + if rp is None: + self._status = (9, "Cannot create tag {0} request packet. 
\ + write_array will not be executed.".format(tag)) + return None + else: + # Creating the Message Request Packet + message_request = [ + pack_uint(Base._get_sequence()), + chr(TAG_SERVICES_REQUEST["Write Tag Fragmented"]), # the Request Service + chr(len(rp) / 2), # the Request Path Size length in word + rp, # the request path + pack_uint(S_DATA_TYPE[data_type]), # Data type to write + pack_uint(len(values)), # Number of elements to write + pack_dint(byte_offset), + array_of_values # Fragment of elements to write + ] + byte_offset += byte_size + + if self.send_unit_data( + build_common_packet_format( + DATA_ITEM['Connected'], + ''.join(message_request), + ADDRESS_ITEM['Connection Based'], + addr_data=self._target_cid, + )) is None: + raise DataError("send_unit_data returned not valid data") + array_of_values = "" + byte_size = 0 + + def _get_instance_attribute_list_service(self): + """ Step 1: Finding user-created controller scope tags in a Logix5000 controller + + This service returns instance IDs for each created instance of the symbol class, along with a list + of the attribute data associated with the requested attribute + """ + try: + if not self._target_is_connected: + if not self.forward_open(): + self._status = (10, "Target did not connected. get_tag_list will not be executed.") + logger.warning(self._status) + raise DataError("Target did not connected. 
get_tag_list will not be executed.") + + self._last_instance = 0 + + self._get_template_in_progress = True + while self._last_instance != -1: + + # Creating the Message Request Packet + + message_request = [ + pack_uint(Base._get_sequence()), + chr(TAG_SERVICES_REQUEST['Get Instance Attributes List']), # STEP 1 + # the Request Path Size length in word + chr(3), + # Request Path ( 20 6B 25 00 Instance ) + CLASS_ID["8-bit"], # Class id = 20 from spec 0x20 + CLASS_CODE["Symbol Object"], # Logical segment: Symbolic Object 0x6B + INSTANCE_ID["16-bit"], # Instance Segment: 16 Bit instance 0x25 + '\x00', + pack_uint(self._last_instance), # The instance + # Request Data + pack_uint(2), # Number of attributes to retrieve + pack_uint(1), # Attribute 1: Symbol name + pack_uint(2) # Attribute 2: Symbol type + ] + + if self.send_unit_data( + build_common_packet_format( + DATA_ITEM['Connected'], + ''.join(message_request), + ADDRESS_ITEM['Connection Based'], + addr_data=self._target_cid, + )) is None: + raise DataError("send_unit_data returned not valid data") + + self._get_template_in_progress = False + + except Exception as e: + raise DataError(e) + + def _get_structure_makeup(self, instance_id): + """ + get the structure makeup for a specific structure + """ + if not self._target_is_connected: + if not self.forward_open(): + self._status = (10, "Target did not connected. get_tag_list will not be executed.") + logger.warning(self._status) + raise DataError("Target did not connected. 
get_tag_list will not be executed.") + + message_request = [ + pack_uint(self._get_sequence()), + chr(TAG_SERVICES_REQUEST['Get Attributes']), + chr(3), # Request Path ( 20 6B 25 00 Instance ) + CLASS_ID["8-bit"], # Class id = 20 from spec 0x20 + CLASS_CODE["Template Object"], # Logical segment: Template Object 0x6C + INSTANCE_ID["16-bit"], # Instance Segment: 16 Bit instance 0x25 + '\x00', + pack_uint(instance_id), + pack_uint(4), # Number of attributes + pack_uint(4), # Template Object Definition Size UDINT + pack_uint(5), # Template Structure Size UDINT + pack_uint(2), # Template Member Count UINT + pack_uint(1) # Structure Handle We can use this to read and write UINT + ] + + if self.send_unit_data( + build_common_packet_format(DATA_ITEM['Connected'], + ''.join(message_request), ADDRESS_ITEM['Connection Based'], + addr_data=self._target_cid,)) is None: + raise DataError("send_unit_data returned not valid data") + + return self._buffer + + def _read_template(self, instance_id, object_definition_size): + """ get a list of the tags in the plc + + """ + if not self._target_is_connected: + if not self.forward_open(): + self._status = (10, "Target did not connected. get_tag_list will not be executed.") + logger.warning(self._status) + raise DataError("Target did not connected. 
get_tag_list will not be executed.") + + self._byte_offset = 0 + self._buffer = "" + self._get_template_in_progress = True + + try: + while self._get_template_in_progress: + + # Creating the Message Request Packet + + message_request = [ + pack_uint(self._get_sequence()), + chr(TAG_SERVICES_REQUEST['Read Template']), + chr(3), # Request Path ( 20 6B 25 00 Instance ) + CLASS_ID["8-bit"], # Class id = 20 from spec 0x20 + CLASS_CODE["Template Object"], # Logical segment: Template Object 0x6C + INSTANCE_ID["16-bit"], # Instance Segment: 16 Bit instance 0x25 + '\x00', + pack_uint(instance_id), + pack_dint(self._byte_offset), # Offset + pack_uint(((object_definition_size * 4)-23) - self._byte_offset) + ] + + if not self.send_unit_data( + build_common_packet_format(DATA_ITEM['Connected'], ''.join(message_request), + ADDRESS_ITEM['Connection Based'], addr_data=self._target_cid,)): + raise DataError("send_unit_data returned not valid data") + + self._get_template_in_progress = False + return self._buffer + + except Exception as e: + raise DataError(e) + + def _isolating_user_tag(self): + try: + lst = self._tag_list + self._tag_list = [] + for tag in lst: + if tag['tag_name'].find(':') != -1 or tag['tag_name'].find('__') != -1: + continue + if tag['symbol_type'] & 0b0001000000000000: + continue + dimension = (tag['symbol_type'] & 0b0110000000000000) >> 13 + + if tag['symbol_type'] & 0b1000000000000000 : + template_instance_id = tag['symbol_type'] & 0b0000111111111111 + tag_type = 'struct' + data_type = 'user-created' + self._tag_list.append({'instance_id': tag['instance_id'], + 'template_instance_id': template_instance_id, + 'tag_name': tag['tag_name'], + 'dim': dimension, + 'tag_type': tag_type, + 'data_type': data_type, + 'template': {}, + 'udt': {}}) + else: + tag_type = 'atomic' + datatype = tag['symbol_type'] & 0b0000000011111111 + data_type = I_DATA_TYPE[datatype] + if datatype == 0xc1: + bit_position = (tag['symbol_type'] & 0b0000011100000000) >> 8 + 
self._tag_list.append({'instance_id': tag['instance_id'], + 'tag_name': tag['tag_name'], + 'dim': dimension, + 'tag_type': tag_type, + 'data_type': data_type, + 'bit_position' : bit_position}) + else: + self._tag_list.append({'instance_id': tag['instance_id'], + 'tag_name': tag['tag_name'], + 'dim': dimension, + 'tag_type': tag_type, + 'data_type': data_type}) + except Exception as e: + raise DataError(e) + + def _parse_udt_raw(self, tag): + try: + buff = self._read_template(tag['template_instance_id'], tag['template']['object_definition_size']) + member_count = tag['template']['member_count'] + names = buff.split('\00') + lst = [] + + tag['udt']['name'] = 'Not an user defined structure' + for name in names: + if len(name) > 1: + + if name.find(';') != -1: + tag['udt']['name'] = name[:name.find(';')] + elif name.find('ZZZZZZZZZZ') != -1: + continue + elif name.isalpha(): + lst.append(name) + else: + continue + tag['udt']['internal_tags'] = lst + + type_list = [] + + for i in xrange(member_count): + # skip member 1 + + if i != 0: + array_size = unpack_uint(buff[:2]) + try: + data_type = I_DATA_TYPE[unpack_uint(buff[2:4])] + except Exception: + data_type = "None" + + offset = unpack_dint(buff[4:8]) + type_list.append((array_size, data_type, offset)) + + buff = buff[8:] + + tag['udt']['data_type'] = type_list + except Exception as e: + raise DataError(e) + + def get_tag_list(self): + self._tag_list = [] + # Step 1 + self._get_instance_attribute_list_service() + + # Step 2 + self._isolating_user_tag() + + # Step 3 + for tag in self._tag_list: + if tag['tag_type'] == 'struct': + tag['template'] = self._get_structure_makeup(tag['template_instance_id']) + + for idx, tag in enumerate(self._tag_list): + # print (tag) + if tag['tag_type'] == 'struct': + self._parse_udt_raw(tag) + + # Step 4 + + return self._tag_list + + def write_string(self, tag, value, size=82): + """ + Rockwell define different string size: + STRING STRING_12 STRING_16 STRING_20 STRING_40 STRING_8 + by 
default we assume size 82 (STRING) + """ + if size not in string_sizes: + raise DataError("String size is incorrect") + + data_tag = ".".join((tag, "DATA")) + len_tag = ".".join((tag, "LEN")) + + # create an empty array + data_to_send = [0] * size + for idx, val in enumerate(value): + data_to_send[idx] = ord(val) + + self.write_tag(len_tag, len(value), 'DINT') + self.write_array(data_tag, data_to_send, 'SINT') + + def read_string(self, tag): + data_tag = ".".join((tag, "DATA")) + len_tag = ".".join((tag, "LEN")) + length = self.read_tag(len_tag) + values = self.read_array(data_tag, length[0]) + values = zip(*values)[1] #[val[1] for val in values] + char_array = [chr(ch) for ch in values] + return ''.join(char_array) diff --git a/pycomm/ab_comm/clx.pyc b/pycomm/ab_comm/clx.pyc new file mode 100644 index 0000000..4b8724b Binary files /dev/null and b/pycomm/ab_comm/clx.pyc differ diff --git a/pycomm/ab_comm/slc.py b/pycomm/ab_comm/slc.py new file mode 100644 index 0000000..834cd7c --- /dev/null +++ b/pycomm/ab_comm/slc.py @@ -0,0 +1,574 @@ +# -*- coding: utf-8 -*- +# +# clx.py - Ethernet/IP Client for Rockwell PLCs +# +# +# Copyright (c) 2014 Agostino Ruscito +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# +from pycomm.cip.cip_base import * +import re +import math +#import binascii + +import logging +try: # Python 2.7+ + from logging import NullHandler +except ImportError: + class NullHandler(logging.Handler): + def emit(self, record): + pass + +logger = logging.getLogger(__name__) +logger.addHandler(NullHandler()) + + +def parse_tag(tag): + t = re.search(r"(?P[CT])(?P\d{1,3})" + r"(:)(?P\d{1,3})" + r"(.)(?PACC|PRE|EN|DN|TT|CU|CD|DN|OV|UN|UA)", tag, flags=re.IGNORECASE) + if t: + if (1 <= int(t.group('file_number')) <= 255) \ + and (0 <= int(t.group('element_number')) <= 255): + return True, t.group(0), {'file_type': t.group('file_type').upper(), + 'file_number': t.group('file_number'), + 'element_number': t.group('element_number'), + 'sub_element': PCCC_CT[t.group('sub_element').upper()], + 'read_func': '\xa2', + 'write_func': '\xab', + 'address_field': 3} + + t = re.search(r"(?P[LFBN])(?P\d{1,3})" + r"(:)(?P\d{1,3})" + r"(/(?P\d{1,2}))?", + tag, flags=re.IGNORECASE) + if t: + if t.group('sub_element') is not None: + if (1 <= int(t.group('file_number')) <= 255) \ + and (0 <= int(t.group('element_number')) <= 255) \ + and (0 <= int(t.group('sub_element')) <= 15): + + return True, t.group(0), {'file_type': t.group('file_type').upper(), + 'file_number': t.group('file_number'), + 'element_number': t.group('element_number'), + 'sub_element': t.group('sub_element'), + 'read_func': '\xa2', + 'write_func': '\xab', + 'address_field': 3} + else: + if (1 <= int(t.group('file_number')) <= 255) \ + and (0 <= int(t.group('element_number')) <= 255): + + return True, t.group(0), {'file_type': t.group('file_type').upper(), + 'file_number': t.group('file_number'), + 'element_number': t.group('element_number'), + 
+                                          'sub_element': t.group('sub_element'),
+                                          'read_func': '\xa2',
+                                          'write_func': '\xab',
+                                          'address_field': 2}
+
+    t = re.search(r"(?P<file_type>[IO])(:)(?P<file_number>\d{1,3})"
+                  r"(.)(?P<element_number>\d{1,3})"
+                  r"(/(?P<sub_element>\d{1,2}))?", tag, flags=re.IGNORECASE)
+    if t:
+        if t.group('sub_element') is not None:
+            if (0 <= int(t.group('file_number')) <= 255) \
+                    and (0 <= int(t.group('element_number')) <= 255) \
+                    and (0 <= int(t.group('sub_element')) <= 15):
+
+                return True, t.group(0), {'file_type': t.group('file_type').upper(),
+                                          'file_number': t.group('file_number'),
+                                          'element_number': t.group('element_number'),
+                                          'sub_element': t.group('sub_element'),
+                                          'read_func': '\xa2',
+                                          'write_func': '\xab',
+                                          'address_field': 3}
+        else:
+            if (0 <= int(t.group('file_number')) <= 255) \
+                    and (0 <= int(t.group('element_number')) <= 255):
+
+                return True, t.group(0), {'file_type': t.group('file_type').upper(),
+                                          'file_number': t.group('file_number'),
+                                          'element_number': t.group('element_number'),
+                                          'read_func': '\xa2',
+                                          'write_func': '\xab',
+                                          'address_field': 2}
+
+    t = re.search(r"(?P<file_type>S)"
+                  r"(:)(?P<element_number>\d{1,3})"
+                  r"(/(?P<sub_element>\d{1,2}))?", tag, flags=re.IGNORECASE)
+    if t:
+        if t.group('sub_element') is not None:
+            if (0 <= int(t.group('element_number')) <= 255) \
+                    and (0 <= int(t.group('sub_element')) <= 15):
+                return True, t.group(0), {'file_type': t.group('file_type').upper(),
+                                          'file_number': '2',
+                                          'element_number': t.group('element_number'),
+                                          'sub_element': t.group('sub_element'),
+                                          'read_func': '\xa2',
+                                          'write_func': '\xab',
+                                          'address_field': 3}
+        else:
+            if 0 <= int(t.group('element_number')) <= 255:
+                return True, t.group(0), {'file_type': t.group('file_type').upper(),
+                                          'file_number': '2',
+                                          'element_number': t.group('element_number'),
+                                          'read_func': '\xa2',
+                                          'write_func': '\xab',
+                                          'address_field': 2}
+
+    t = re.search(r"(?P<file_type>B)(?P<file_number>\d{1,3})"
+                  r"(/)(?P<element_number>\d{1,4})",
+                  tag, flags=re.IGNORECASE)
+    if t:
+        if (1 <= int(t.group('file_number')) <= 255) \
+                and (0 <= int(t.group('element_number')) <= 4095):
bit_position = int(t.group('element_number')) + element_number = bit_position / 16 + sub_element = bit_position - (element_number * 16) + return True, t.group(0), {'file_type': t.group('file_type').upper(), + 'file_number': t.group('file_number'), + 'element_number': element_number, + 'sub_element': sub_element, + 'read_func': '\xa2', + 'write_func': '\xab', + 'address_field': 3} + + return False, tag + + +class Driver(Base): + """ + SLC/PLC_5 Implementation + """ + def __init__(self): + super(Driver, self).__init__() + + self.__version__ = '0.1' + self._last_sequence = 0 + + def _check_reply(self): + """ + check the replayed message for error + """ + self._more_packets_available = False + try: + if self._reply is None: + self._status = (3, '%s without reply' % REPLAY_INFO[unpack_dint(self._message[:2])]) + return False + # Get the type of command + typ = unpack_uint(self._reply[:2]) + + # Encapsulation status check + if unpack_dint(self._reply[8:12]) != SUCCESS: + self._status = (3, "{0} reply status:{1}".format(REPLAY_INFO[typ], + SERVICE_STATUS[unpack_dint(self._reply[8:12])])) + return False + + # Command Specific Status check + if typ == unpack_uint(ENCAPSULATION_COMMAND["send_rr_data"]): + status = unpack_usint(self._reply[42:43]) + if status != SUCCESS: + self._status = (3, "send_rr_data reply:{0} - Extend status:{1}".format( + SERVICE_STATUS[status], get_extended_status(self._reply, 42))) + return False + else: + return True + + elif typ == unpack_uint(ENCAPSULATION_COMMAND["send_unit_data"]): + status = unpack_usint(self._reply[48:49]) + if unpack_usint(self._reply[46:47]) == I_TAG_SERVICES_REPLY["Read Tag Fragmented"]: + self._parse_fragment(50, status) + return True + if unpack_usint(self._reply[46:47]) == I_TAG_SERVICES_REPLY["Get Instance Attributes List"]: + self._parse_tag_list(50, status) + return True + if status == 0x06: + self._status = (3, "Insufficient Packet Space") + self._more_packets_available = True + elif status != SUCCESS: + self._status 
= (3, "send_unit_data reply:{0} - Extend status:{1}".format(
+                        SERVICE_STATUS[status], get_extended_status(self._reply, 48)))
+                    return False
+                else:
+                    return True
+
+            return True
+        except Exception as e:
+            raise DataError(e)
+
+    def __queue_data_available(self, queue_number):
+        """ check whether a record is available in the given queue
+
+        Issues a PCCC protected typed logical read (three address fields)
+        against the queue and inspects the reply status word.
+
+        :return: True if the queue returned a record, False otherwise
+        """
+
+        # Creating the Message Request Packet
+        self._last_sequence = pack_uint(Base._get_sequence())
+
+        # PCCC_Cmd_Rd_w3_Q2 = [0x0f, 0x00, 0x30, 0x00, 0xa2, 0x6d, 0x00, 0xa5, 0x02, 0x00]
+        message_request = [
+            self._last_sequence,
+            '\x4b',
+            '\x02',
+            CLASS_ID["8-bit"],
+            PATH["PCCC"],
+            '\x07',
+            self.attribs['vid'],
+            self.attribs['vsn'],
+            '\x0f',
+            '\x00',
+            self._last_sequence[1],
+            self._last_sequence[0],
+            '\xa2',  # protected typed logical read with three address fields FNC
+            '\x6d',  # Byte size to read = 109
+            '\x00',  # File Number
+            '\xa5',  # File Type
+            pack_uint(queue_number)
+        ]
+
+        if self.send_unit_data(
+                build_common_packet_format(
+                    DATA_ITEM['Connected'],
+                    ''.join(message_request),
+                    ADDRESS_ITEM['Connection Based'],
+                    addr_data=self._target_cid,)):
+
+            sts = int(unpack_uint(self._reply[2:4]))
+            return sts == 146
+        else:
+            raise DataError("read_queue [send_unit_data] returned not valid data")
+
+    def __save_record(self, filename):
+        with open(filename, "a") as csv_file:
+            # the with block closes the file; no explicit close() is needed
+            logger.debug("SLC __save_record read:{0}".format(self._reply[61:]))
+            csv_file.write(self._reply[61:] + '\n')
+
+    def __get_queue_size(self, queue_number):
+        """ get the number of records waiting in the queue
+        """
+        # Creating the Message Request Packet
+        self._last_sequence = pack_uint(Base._get_sequence())
+
+        message_request = [
+            self._last_sequence,
+            '\x4b',
+            '\x02',
+            CLASS_ID["8-bit"],
PATH["PCCC"], + '\x07', + self.attribs['vid'], + self.attribs['vsn'], + '\x0f', + '\x00', + self._last_sequence[1], + self._last_sequence[0], + # '\x30', + # '\x00', + '\xa1', # FNC to get the queue size + '\x06', # Byte size to read = 06 + '\x00', # File Number + '\xea', # File Type ???? + '\xff', # File Type ???? + pack_uint(queue_number) + ] + + if self.send_unit_data( + build_common_packet_format( + DATA_ITEM['Connected'], + ''.join(message_request), + ADDRESS_ITEM['Connection Based'], + addr_data=self._target_cid,)): + sts = int(unpack_uint(self._reply[65:67])) + logger.debug("SLC __get_queue_size({0}) returned {1}".format(queue_number, sts)) + return sts + else: + raise DataError("read_queue [send_unit_data] returned not valid data") + + def read_queue(self, queue_number, file_name): + """ read the queue + + """ + if not self._target_is_connected: + if not self.forward_open(): + self._status = (5, "Target did not connected. is_queue_available will not be executed.") + logger.warning(self._status) + raise DataError("Target did not connected. 
is_queue_available will not be executed.")
+
+        if self.__queue_data_available(queue_number):
+            logger.debug("SLC read_queue: Queue {0} has data".format(queue_number))
+            self.__save_record(file_name)
+            size = self.__get_queue_size(queue_number)
+            if size > 0:
+                for i in range(0, size):
+                    if self.__queue_data_available(queue_number):
+                        self.__save_record(file_name)
+
+            logger.debug("SLC read_queue: {0} records extracted from queue {1}".format(size, queue_number))
+        else:
+            logger.debug("SLC read_queue: Queue {0} has no data".format(queue_number))
+
+    def read_tag(self, tag, n=1):
+        """ read a tag from a connected plc
+
+        Possible combinations that can be passed to this method:
+        print c.read_tag('F8:0', 3)   returns a list of 3 registers starting from F8:0
+        print c.read_tag('F8:0')      returns one value
+
+        It is also possible to read status bits.
+
+        :return: None is returned in case of error
+        """
+        res = parse_tag(tag)
+        if not res[0]:
+            self._status = (1000, "Error parsing the tag passed to read_tag({0},{1})".format(tag, n))
+            logger.warning(self._status)
+            raise DataError("Error parsing the tag passed to read_tag({0},{1})".format(tag, n))
+
+        bit_read = False
+        bit_position = 0
+        sub_element = 0
+        if int(res[2]['address_field']) == 3:
+            bit_read = True
+            bit_position = int(res[2]['sub_element'])
+
+        if not self._target_is_connected:
+            if not self.forward_open():
+                self._status = (5, "Target did not connect. read_tag will not be executed.")
+                logger.warning(self._status)
+                raise DataError("Target did not connect. 
read_tag will not be executed.") + + data_size = PCCC_DATA_SIZE[res[2]['file_type']] + + # Creating the Message Request Packet + self._last_sequence = pack_uint(Base._get_sequence()) + + message_request = [ + self._last_sequence, + '\x4b', + '\x02', + CLASS_ID["8-bit"], + PATH["PCCC"], + '\x07', + self.attribs['vid'], + self.attribs['vsn'], + '\x0f', + '\x00', + self._last_sequence[1], + self._last_sequence[0], + res[2]['read_func'], + pack_usint(data_size * n), + pack_usint(int(res[2]['file_number'])), + PCCC_DATA_TYPE[res[2]['file_type']], + pack_usint(int(res[2]['element_number'])), + pack_usint(sub_element) + ] + + logger.debug("SLC read_tag({0},{1})".format(tag, n)) + if self.send_unit_data( + build_common_packet_format( + DATA_ITEM['Connected'], + ''.join(message_request), + ADDRESS_ITEM['Connection Based'], + addr_data=self._target_cid,)): + sts = int(unpack_usint(self._reply[58])) + try: + if sts != 0: + sts_txt = PCCC_ERROR_CODE[sts] + self._status = (1000, "Error({0}) returned from read_tag({1},{2})".format(sts_txt, tag, n)) + logger.warning(self._status) + raise DataError("Error({0}) returned from read_tag({1},{2})".format(sts_txt, tag, n)) + + new_value = 61 + if bit_read: + if res[2]['file_type'] == 'T' or res[2]['file_type'] == 'C': + if bit_position == PCCC_CT['PRE']: + return UNPACK_PCCC_DATA_FUNCTION[res[2]['file_type']]( + self._reply[new_value+2:new_value+2+data_size]) + elif bit_position == PCCC_CT['ACC']: + return UNPACK_PCCC_DATA_FUNCTION[res[2]['file_type']]( + self._reply[new_value+4:new_value+4+data_size]) + + tag_value = UNPACK_PCCC_DATA_FUNCTION[res[2]['file_type']]( + self._reply[new_value:new_value+data_size]) + return get_bit(tag_value, bit_position) + + else: + values_list = [] + while len(self._reply[new_value:]) >= data_size: + values_list.append( + UNPACK_PCCC_DATA_FUNCTION[res[2]['file_type']](self._reply[new_value:new_value+data_size]) + ) + new_value = new_value+data_size + + if len(values_list) > 1: + return values_list + else: 
+                        return values_list[0]
+
+            except Exception as e:
+                self._status = (1000, "Error({0}) parsing the data returned from read_tag({1},{2})".format(e, tag, n))
+                logger.warning(self._status)
+                raise DataError("Error({0}) parsing the data returned from read_tag({1},{2})".format(e, tag, n))
+        else:
+            raise DataError("send_unit_data returned not valid data")
+
+    def write_tag(self, tag, value):
+        """ write a tag to a connected plc
+
+        Possible combinations that can be passed to this method:
+        c.write_tag('N7:0', [-30, 32767, -32767])
+        c.write_tag('N7:0', 21)
+
+        It is not possible to write status bits.
+
+        :return: None is returned in case of error
+        """
+        res = parse_tag(tag)
+        if not res[0]:
+            self._status = (1000, "Error parsing the tag passed to write_tag({0},{1})".format(tag, value))
+            logger.warning(self._status)
+            raise DataError("Error parsing the tag passed to write_tag({0},{1})".format(tag, value))
+
+        # a list of values cannot be written to a bit-level (three address field) address
+        if isinstance(value, list) and int(res[2]['address_field']) == 3:
+            self._status = (1000, "Function's parameters error. write_tag({0},{1})".format(tag, value))
+            logger.warning(self._status)
+            raise DataError("Function's parameters error. write_tag({0},{1})".format(tag, value))
+
+        bit_field = False
+        bit_position = 0
+        sub_element = 0
+        if int(res[2]['address_field']) == 3:
+            bit_field = True
+            bit_position = int(res[2]['sub_element'])
+            values_list = ''
+        else:
+            values_list = '\xff\xff'
+
+        multi_requests = False
+        if isinstance(value, list):
+            multi_requests = True
+
+        if not self._target_is_connected:
+            if not self.forward_open():
+                self._status = (1000, "Target did not connect. 
write_tag will not be executed.")
+                logger.warning(self._status)
+                raise DataError("Target did not connect. write_tag will not be executed.")
+
+        try:
+            n = 0
+            if multi_requests:
+                data_size = PCCC_DATA_SIZE[res[2]['file_type']]
+                for v in value:
+                    values_list += PACK_PCCC_DATA_FUNCTION[res[2]['file_type']](v)
+                    n += 1
+            else:
+                n = 1
+                if bit_field:
+                    data_size = 2
+
+                    if (res[2]['file_type'] == 'T' or res[2]['file_type'] == 'C') \
+                            and (bit_position == PCCC_CT['PRE'] or bit_position == PCCC_CT['ACC']):
+                        sub_element = bit_position
+                        values_list = '\xff\xff' + PACK_PCCC_DATA_FUNCTION[res[2]['file_type']](value)
+                    else:
+                        sub_element = 0
+                        # use an integer power of two; math.pow returns a float,
+                        # which struct cannot pack as an unsigned int
+                        if value > 0:
+                            values_list = pack_uint(2 ** bit_position) + pack_uint(2 ** bit_position)
+                        else:
+                            values_list = pack_uint(2 ** bit_position) + pack_uint(0)
+
+                else:
+                    values_list += PACK_PCCC_DATA_FUNCTION[res[2]['file_type']](value)
+                    data_size = PCCC_DATA_SIZE[res[2]['file_type']]
+
+        except Exception as e:
+            self._status = (1000, "Error({0}) packing the values to write to the "
+                                  "SLC write_tag({1},{2})".format(e, tag, value))
+            logger.warning(self._status)
+            raise DataError("Error({0}) packing the values to write to the "
+                            "SLC write_tag({1},{2})".format(e, tag, value))
+
+        data_to_write = values_list
+
+        # Creating the Message Request Packet
+        self._last_sequence = pack_uint(Base._get_sequence())
+
+        message_request = [
+            self._last_sequence,
+            '\x4b',
+            '\x02',
+            CLASS_ID["8-bit"],
+            PATH["PCCC"],
+            '\x07',
+            self.attribs['vid'],
+            self.attribs['vsn'],
+            '\x0f',
+            '\x00',
+            self._last_sequence[1],
+            self._last_sequence[0],
+            res[2]['write_func'],
+            pack_usint(data_size * n),
+            pack_usint(int(res[2]['file_number'])),
+            PCCC_DATA_TYPE[res[2]['file_type']],
+            pack_usint(int(res[2]['element_number'])),
+            pack_usint(sub_element)
+        ]
+
+        logger.debug("SLC write_tag({0},{1})".format(tag, value))
+        if self.send_unit_data(
+                build_common_packet_format(
+                    DATA_ITEM['Connected'],
''.join(message_request) + data_to_write, + ADDRESS_ITEM['Connection Based'], + addr_data=self._target_cid,)): + sts = int(unpack_usint(self._reply[58])) + try: + if sts != 0: + sts_txt = PCCC_ERROR_CODE[sts] + self._status = (1000, "Error({0}) returned from SLC write_tag({1},{2})".format(sts_txt, tag, value)) + logger.warning(self._status) + raise DataError("Error({0}) returned from SLC write_tag({1},{2})".format(sts_txt, tag, value)) + + return True + except Exception as e: + self._status = (1000, "Error({0}) parsing the data returned from " + "SLC write_tag({1},{2})".format(e, tag, value)) + logger.warning(self._status) + raise DataError("Error({0}) parsing the data returned from " + "SLC write_tag({1},{2})".format(e, tag, value)) + else: + raise DataError("send_unit_data returned not valid data") diff --git a/pycomm/cip/__init__.py b/pycomm/cip/__init__.py new file mode 100644 index 0000000..8c1f233 --- /dev/null +++ b/pycomm/cip/__init__.py @@ -0,0 +1 @@ +__author__ = 'agostino' diff --git a/pycomm/cip/__init__.pyc b/pycomm/cip/__init__.pyc new file mode 100644 index 0000000..974d04f Binary files /dev/null and b/pycomm/cip/__init__.pyc differ diff --git a/pycomm/cip/cip_base.py b/pycomm/cip/cip_base.py new file mode 100644 index 0000000..81757ae --- /dev/null +++ b/pycomm/cip/cip_base.py @@ -0,0 +1,896 @@ +# -*- coding: utf-8 -*- +# +# cip_base.py - A set of classes methods and structures used to implement Ethernet/IP +# +# +# Copyright (c) 2014 Agostino Ruscito +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission 
notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# + +import struct +import socket +import random + +from os import getpid +from pycomm.cip.cip_const import * +from pycomm.common import PycommError + + +import logging +try: # Python 2.7+ + from logging import NullHandler +except ImportError: + class NullHandler(logging.Handler): + def emit(self, record): + pass +logger = logging.getLogger(__name__) +logger.addHandler(NullHandler()) + + +class CommError(PycommError): + pass + + +class DataError(PycommError): + pass + + +def pack_sint(n): + return struct.pack('b', n) + + +def pack_usint(n): + return struct.pack('B', n) + + +def pack_int(n): + """pack 16 bit into 2 bytes little endian""" + return struct.pack(' +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. 
+# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# + +ELEMENT_ID = { + "8-bit": '\x28', + "16-bit": '\x29', + "32-bit": '\x2a' +} + +CLASS_ID = { + "8-bit": '\x20', + "16-bit": '\x21', +} + +INSTANCE_ID = { + "8-bit": '\x24', + "16-bit": '\x25' +} + +ATTRIBUTE_ID = { + "8-bit": '\x30', + "16-bit": '\x31' +} + +# Path are combined as: +# CLASS_ID + PATHS +# For example PCCC path is CLASS_ID["8-bit"]+PATH["PCCC"] -> 0x20, 0x67, 0x24, 0x01. +PATH = { + 'Connection Manager': '\x06\x24\x01', + 'Router': '\x02\x24\x01', + 'Backplane Data Type': '\x66\x24\x01', + 'PCCC': '\x67\x24\x01', + 'DHCP Channel A': '\xa6\x24\x01\x01\x2c\x01', + 'DHCP Channel B': '\xa6\x24\x01\x02\x2c\x01' +} + +ENCAPSULATION_COMMAND = { # Volume 2: 2-3.2 Command Field UINT 2 byte + "nop": '\x00\x00', + "list_targets": '\x01\x00', + "list_services": '\x04\x00', + "list_identity": '\x63\x00', + "list_interfaces": '\x64\x00', + "register_session": '\x65\x00', + "unregister_session": '\x66\x00', + "send_rr_data": '\x6F\x00', + "send_unit_data": '\x70\x00' +} + +""" +When a tag is created, an instance of the Symbol Object (Class ID 0x6B) is created +inside the controller. + +When a UDT is created, an instance of the Template object (Class ID 0x6C) is +created to hold information about the structure makeup. 
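The comment above notes that request paths are composed as CLASS_ID + PATH, e.g. CLASS_ID["8-bit"] + PATH["PCCC"] yields 0x20, 0x67, 0x24, 0x01. A minimal sketch of that composition, shown with Python 3 bytes literals for illustration (the module itself stores these constants as Python 2 raw strings; the names below are stand-ins, not the module's own):

```python
import struct

# Hypothetical stand-ins for the constants above, as Python 3 bytes.
CLASS_ID_8BIT = b'\x20'        # 8-bit class logical segment
PCCC_PATH = b'\x67\x24\x01'    # PCCC class 0x67, 8-bit instance segment 0x24, instance 1


def pccc_request_path():
    """Compose the request path used for PCCC-encapsulated messages."""
    return CLASS_ID_8BIT + PCCC_PATH


# The composed path is the byte sequence 0x20, 0x67, 0x24, 0x01.
print(struct.unpack('4B', pccc_request_path()))
```

The same pattern applies to the other paths: an 8-bit or 16-bit class segment prefix followed by the class/instance bytes from the PATH table.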
+""" +CLASS_CODE = { + "Message Router": '\x02', # Volume 1: 5-1 + "Symbol Object": '\x6b', + "Template Object": '\x6c', + "Connection Manager": '\x06' # Volume 1: 3-5 +} + +CONNECTION_MANAGER_INSTANCE = { + 'Open Request': '\x01', + 'Open Format Rejected': '\x02', + 'Open Resource Rejected': '\x03', + 'Open Other Rejected': '\x04', + 'Close Request': '\x05', + 'Close Format Request': '\x06', + 'Close Other Request': '\x07', + 'Connection Timeout': '\x08' +} + +TAG_SERVICES_REQUEST = { + "Read Tag": 0x4c, + "Read Tag Fragmented": 0x52, + "Write Tag": 0x4d, + "Write Tag Fragmented": 0x53, + "Read Modify Write Tag": 0x4e, + "Multiple Service Packet": 0x0a, + "Get Instance Attributes List": 0x55, + "Get Attributes": 0x03, + "Read Template": 0x4c, +} + +TAG_SERVICES_REPLY = { + 0xcc: "Read Tag", + 0xd2: "Read Tag Fragmented", + 0xcd: "Write Tag", + 0xd3: "Write Tag Fragmented", + 0xce: "Read Modify Write Tag", + 0x8a: "Multiple Service Packet", + 0xd5: "Get Instance Attributes List", + 0x83: "Get Attributes", + 0xcc: "Read Template" +} + + +I_TAG_SERVICES_REPLY = { + "Read Tag": 0xcc, + "Read Tag Fragmented": 0xd2, + "Write Tag": 0xcd, + "Write Tag Fragmented": 0xd3, + "Read Modify Write Tag": 0xce, + "Multiple Service Packet": 0x8a, + "Get Instance Attributes List": 0xd5, + "Get Attributes": 0x83, + "Read Template": 0xcc +} + + +""" +EtherNet/IP Encapsulation Error Codes + +Standard CIP Encapsulation Error returned in the cip message header +""" +STATUS = { + 0x0000: "Success", + 0x0001: "The sender issued an invalid or unsupported encapsulation command", + 0x0002: "Insufficient memory", + 0x0003: "Poorly formed or incorrect data in the data portion", + 0x0064: "An originator used an invalid session handle when sending an encapsulation message to the target", + 0x0065: "The target received a message of invalid length", + 0x0069: "Unsupported Protocol Version" +} + +""" +MSG Error Codes: + +The following error codes have been taken from: + +Rockwell Automation 
Publication +1756-RM003P-EN-P - December 2014 +""" +SERVICE_STATUS = { + 0x01: "Connection failure (see extended status)", + 0x02: "Insufficient resource", + 0x03: "Invalid value", + 0x04: "IOI syntax error. A syntax error was detected decoding the Request Path (see extended status)", + 0x05: "Destination unknown, class unsupported, instance \nundefined or structure element undefined (see extended status)", + 0x06: "Insufficient Packet Space", + 0x07: "Connection lost", + 0x08: "Service not supported", + 0x09: "Error in data segment or invalid attribute value", + 0x0A: "Attribute list error", + 0x0B: "State already exist", + 0x0C: "Object state conflict", + 0x0D: "Object already exist", + 0x0E: "Attribute not settable", + 0x0F: "Permission denied", + 0x10: "Device state conflict", + 0x11: "Reply data too large", + 0x12: "Fragmentation of a primitive value", + 0x13: "Insufficient command data", + 0x14: "Attribute not supported", + 0x15: "Too much data", + 0x1A: "Bridge request too large", + 0x1B: "Bridge response too large", + 0x1C: "Attribute list shortage", + 0x1D: "Invalid attribute list", + 0x1E: "Request service error", + 0x1F: "Connection related failure (see extended status)", + 0x22: "Invalid reply received", + 0x25: "Key segment error", + 0x26: "Invalid IOI error", + 0x27: "Unexpected attribute in list", + 0x28: "DeviceNet error - invalid member ID", + 0x29: "DeviceNet error - member not settable", + 0xD1: "Module not in run state", + 0xFB: "Message port not supported", + 0xFC: "Message unsupported data type", + 0xFD: "Message uninitialized", + 0xFE: "Message timeout", + 0xff: "General Error (see extended status)" +} + +EXTEND_CODES = { + 0x01: { + 0x0100: "Connection in use", + 0x0103: "Transport not supported", + 0x0106: "Ownership conflict", + 0x0107: "Connection not found", + 0x0108: "Invalid connection type", + 0x0109: "Invalid connection size", + 0x0110: "Module not configured", + 0x0111: "EPR not supported", + 0x0114: "Wrong module", + 0x0115: "Wrong 
device type", + 0x0116: "Wrong revision", + 0x0118: "Invalid configuration format", + 0x011A: "Application out of connections", + 0x0203: "Connection timeout", + 0x0204: "Unconnected message timeout", + 0x0205: "Unconnected send parameter error", + 0x0206: "Message too large", + 0x0301: "No buffer memory", + 0x0302: "Bandwidth not available", + 0x0303: "No screeners available", + 0x0305: "Signature match", + 0x0311: "Port not available", + 0x0312: "Link address not available", + 0x0315: "Invalid segment type", + 0x0317: "Connection not scheduled" + }, + 0x04: { + 0x0000: "Extended status out of memory", + 0x0001: "Extended status out of instances" + }, + 0x05: { + 0x0000: "Extended status out of memory", + 0x0001: "Extended status out of instances" + }, + 0x1F: { + 0x0203: "Connection timeout" + }, + 0xff: { + 0x7: "Wrong data type", + 0x2001: "Excessive IOI", + 0x2002: "Bad parameter value", + 0x2018: "Semaphore reject", + 0x201B: "Size too small", + 0x201C: "Invalid size", + 0x2100: "Privilege failure", + 0x2101: "Invalid keyswitch position", + 0x2102: "Password invalid", + 0x2103: "No password issued", + 0x2104: "Address out of range", + 0x2105: "Access beyond end of the object", + 0x2106: "Data in use", + 0x2107: "Tag type used n request dose not match the target tag's data type", + 0x2108: "Controller in upload or download mode", + 0x2109: "Attempt to change number of array dimensions", + 0x210A: "Invalid symbol name", + 0x210B: "Symbol does not exist", + 0x210E: "Search failed", + 0x210F: "Task cannot start", + 0x2110: "Unable to write", + 0x2111: "Unable to read", + 0x2112: "Shared routine not editable", + 0x2113: "Controller in faulted mode", + 0x2114: "Run mode inhibited" + + } +} +DATA_ITEM = { + 'Connected': '\xb1\x00', + 'Unconnected': '\xb2\x00' +} + +ADDRESS_ITEM = { + 'Connection Based': '\xa1\x00', + 'Null': '\x00\x00', + 'UCMM': '\x00\x00' +} + +UCMM = { + 'Interface Handle': 0, + 'Item Count': 2, + 'Address Type ID': 0, + 'Address Length': 0, + 
'Data Type ID': 0x00b2 +} + +CONNECTION_SIZE = { + 'Backplane': '\x03', # CLX + 'Direct Network': '\x02' +} + +HEADER_SIZE = 24 +EXTENDED_SYMBOL = '\x91' +BOOL_ONE = 0xff +REQUEST_SERVICE = 0 +REQUEST_PATH_SIZE = 1 +REQUEST_PATH = 2 +SUCCESS = 0 +INSUFFICIENT_PACKETS = 6 +OFFSET_MESSAGE_REQUEST = 40 + + +FORWARD_CLOSE = '\x4e' +UNCONNECTED_SEND = '\x52' +FORWARD_OPEN = '\x54' +LARGE_FORWARD_OPEN = '\x5b' +GET_CONNECTION_DATA = '\x56' +SEARCH_CONNECTION_DATA = '\x57' +GET_CONNECTION_OWNER = '\x5a' +MR_SERVICE_SIZE = 2 + +PADDING_BYTE = '\x00' +PRIORITY = '\x0a' +TIMEOUT_TICKS = '\x05' +TIMEOUT_MULTIPLIER = '\x01' +TRANSPORT_CLASS = '\xa3' + +CONNECTION_PARAMETER = { + 'PLC5': 0x4302, + 'SLC500': 0x4302, + 'CNET': 0x4320, + 'DHP': 0x4302, + 'Default': 0x43f8, +} + +""" +Atomic Data Type: + + Bit = Bool + Bit array = DWORD (32-bit boolean aray) + 8-bit integer = SINT +16-bit integer = UINT +32-bit integer = DINT + 32-bit float = REAL +64-bit integer = LINT + +From Rockwell Automation Publication 1756-PM020C-EN-P November 2012: +When reading a BOOL tag, the values returned for 0 and 1 are 0 and 0xff, respectively. 
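Per the note above, a BOOL read comes back as 0 or 0xff. The SLC driver's read_tag resolves bit-level addresses through a get_bit helper; a minimal sketch consistent with that usage (the real helper lives in cip_base and may differ in detail):

```python
def get_bit(value, position):
    # Return 1 if the given bit of a word is set, else 0. read_tag uses a
    # helper like this to turn a 16-bit word read into a single bit value.
    return 1 if value & (1 << position) else 0


# Bit 2 of 0x0004 is set, bit 3 is not; a BOOL's 0xff has every bit set.
print(get_bit(0x0004, 2), get_bit(0x0004, 3))
```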
+""" + +S_DATA_TYPE = { + 'BOOL': 0xc1, + 'SINT': 0xc2, # Signed 8-bit integer + 'INT': 0xc3, # Signed 16-bit integer + 'DINT': 0xc4, # Signed 32-bit integer + 'LINT': 0xc5, # Signed 64-bit integer + 'USINT': 0xc6, # Unsigned 8-bit integer + 'UINT': 0xc7, # Unsigned 16-bit integer + 'UDINT': 0xc8, # Unsigned 32-bit integer + 'ULINT': 0xc9, # Unsigned 64-bit integer + 'REAL': 0xca, # 32-bit floating point + 'LREAL': 0xcb, # 64-bit floating point + 'STIME': 0xcc, # Synchronous time + 'DATE': 0xcd, + 'TIME_OF_DAY': 0xce, + 'DATE_AND_TIME': 0xcf, + 'STRING': 0xd0, # character string (1 byte per character) + 'BYTE': 0xd1, # byte string 8-bits + 'WORD': 0xd2, # byte string 16-bits + 'DWORD': 0xd3, # byte string 32-bits + 'LWORD': 0xd4, # byte string 64-bits + 'STRING2': 0xd5, # character string (2 byte per character) + 'FTIME': 0xd6, # Duration high resolution + 'LTIME': 0xd7, # Duration long + 'ITIME': 0xd8, # Duration short + 'STRINGN': 0xd9, # character string (n byte per character) + 'SHORT_STRING': 0xda, # character string (1 byte per character, 1 byte length indicator) + 'TIME': 0xdb, # Duration in milliseconds + 'EPATH': 0xdc, # CIP Path segment + 'ENGUNIT': 0xdd, # Engineering Units + 'STRINGI': 0xde # International character string +} + +I_DATA_TYPE = { + 0xc1: 'BOOL', + 0xc2: 'SINT', # Signed 8-bit integer + 0xc3: 'INT', # Signed 16-bit integer + 0xc4: 'DINT', # Signed 32-bit integer + 0xc5: 'LINT', # Signed 64-bit integer + 0xc6: 'USINT', # Unsigned 8-bit integer + 0xc7: 'UINT', # Unsigned 16-bit integer + 0xc8: 'UDINT', # Unsigned 32-bit integer + 0xc9: 'ULINT', # Unsigned 64-bit integer + 0xca: 'REAL', # 32-bit floating point + 0xcb: 'LREAL', # 64-bit floating point + 0xcc: 'STIME', # Synchronous time + 0xcd: 'DATE', + 0xce: 'TIME_OF_DAY', + 0xcf: 'DATE_AND_TIME', + 0xd0: 'STRING', # character string (1 byte per character) + 0xd1: 'BYTE', # byte string 8-bits + 0xd2: 'WORD', # byte string 16-bits + 0xd3: 'DWORD', # byte string 32-bits + 0xd4: 'LWORD', # byte 
string 64-bits + 0xd5: 'STRING2', # character string (2 byte per character) + 0xd6: 'FTIME', # Duration high resolution + 0xd7: 'LTIME', # Duration long + 0xd8: 'ITIME', # Duration short + 0xd9: 'STRINGN', # character string (n byte per character) + 0xda: 'SHORT_STRING', # character string (1 byte per character, 1 byte length indicator) + 0xdb: 'TIME', # Duration in milliseconds + 0xdc: 'EPATH', # CIP Path segment + 0xdd: 'ENGUNIT', # Engineering Units + 0xde: 'STRINGI' # International character string +} + +REPLAY_INFO = { + 0x4e: 'FORWARD_CLOSE (4E,00)', + 0x52: 'UNCONNECTED_SEND (52,00)', + 0x54: 'FORWARD_OPEN (54,00)', + 0x6f: 'send_rr_data (6F,00)', + 0x70: 'send_unit_data (70,00)', + 0x00: 'nop', + 0x01: 'list_targets', + 0x04: 'list_services', + 0x63: 'list_identity', + 0x64: 'list_interfaces', + 0x65: 'register_session', + 0x66: 'unregister_session', +} + +PCCC_DATA_TYPE = { + 'N': '\x89', + 'B': '\x85', + 'T': '\x86', + 'C': '\x87', + 'S': '\x84', + 'F': '\x8a', + 'ST': '\x8d', + 'A': '\x8e', + 'R': '\x88', + 'O': '\x8b', + 'I': '\x8c' +} + +PCCC_DATA_SIZE = { + 'N': 2, + # 'L': 4, + 'B': 2, + 'T': 6, + 'C': 6, + 'S': 2, + 'F': 4, + 'ST': 84, + 'A': 2, + 'R': 6, + 'O': 2, + 'I': 2 +} + +PCCC_CT = { + 'PRE': 1, + 'ACC': 2, + 'EN': 15, + 'TT': 14, + 'DN': 13, + 'CU': 15, + 'CD': 14, + 'OV': 12, + 'UN': 11, + 'UA': 10 +} + +PCCC_ERROR_CODE = { + -2: "Not Acknowledged (NAK)", + -3: "No Reponse, Check COM Settings", + -4: "Unknown Message from DataLink Layer", + -5: "Invalid Address", + -6: "Could Not Open Com Port", + -7: "No data specified to data link layer", + -8: "No data returned from PLC", + -20: "No Data Returned", + 16: "Illegal Command or Format, Address may not exist or not enough elements in data file", + 32: "PLC Has a Problem and Will Not Communicate", + 48: "Remote Node Host is Missing, Disconnected, or Shut Down", + 64: "Host Could Not Complete Function Due To Hardware Fault", + 80: "Addressing problem or Memory Protect Rungs", + 96: "Function 
not allows due to command protection selection", + 112: "Processor is in Program mode", + 128: "Compatibility mode file missing or communication zone problem", + 144: "Remote node cannot buffer command", + 240: "Error code in EXT STS Byte" +} \ No newline at end of file diff --git a/pycomm/cip/cip_const.pyc b/pycomm/cip/cip_const.pyc new file mode 100644 index 0000000..4fe3cca Binary files /dev/null and b/pycomm/cip/cip_const.pyc differ diff --git a/pycomm/common.py b/pycomm/common.py new file mode 100644 index 0000000..fc92570 --- /dev/null +++ b/pycomm/common.py @@ -0,0 +1,7 @@ +__author__ = 'Agostino Ruscito' +__version__ = "1.0.8" +__date__ = "08 03 2015" + + +class PycommError(Exception): + pass diff --git a/pycomm/common.pyc b/pycomm/common.pyc new file mode 100644 index 0000000..7e477e5 Binary files /dev/null and b/pycomm/common.pyc differ diff --git a/root-CA.crt b/root-CA.crt new file mode 100644 index 0000000..a6f3e92 --- /dev/null +++ b/root-CA.crt @@ -0,0 +1,20 @@ +-----BEGIN CERTIFICATE----- +MIIDQTCCAimgAwIBAgITBmyfz5m/jAo54vB4ikPmljZbyjANBgkqhkiG9w0BAQsF +ADA5MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6 +b24gUm9vdCBDQSAxMB4XDTE1MDUyNjAwMDAwMFoXDTM4MDExNzAwMDAwMFowOTEL +MAkGA1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJv +b3QgQ0EgMTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALJ4gHHKeNXj +ca9HgFB0fW7Y14h29Jlo91ghYPl0hAEvrAIthtOgQ3pOsqTQNroBvo3bSMgHFzZM +9O6II8c+6zf1tRn4SWiw3te5djgdYZ6k/oI2peVKVuRF4fn9tBb6dNqcmzU5L/qw +IFAGbHrQgLKm+a/sRxmPUDgH3KKHOVj4utWp+UhnMJbulHheb4mjUcAwhmahRWa6 +VOujw5H5SNz/0egwLX0tdHA114gk957EWW67c4cX8jJGKLhD+rcdqsq08p8kDi1L +93FcXmn/6pUCyziKrlA4b9v7LWIbxcceVOF34GfID5yHI9Y/QCB/IIDEgEw+OyQm +jgSubJrIqg0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC +AYYwHQYDVR0OBBYEFIQYzIU07LwMlJQuCFmcx7IQTgoIMA0GCSqGSIb3DQEBCwUA +A4IBAQCY8jdaQZChGsV2USggNiMOruYou6r4lK5IpDB/G/wkjUu0yKGX9rbxenDI +U5PMCCjjmCXPI6T53iHTfIUJrU6adTrCC2qJeHZERxhlbI1Bjjt/msv0tadQ1wUs 
+N+gDS63pYaACbvXy8MWy7Vu33PqUXHeeE6V/Uq2V8viTO96LXFvKWlJbYK8U90vv +o/ufQJVtMVT8QtPHRh8jrdkPSHCa2XV4cdFyQzR1bldZwgJcJmApzyMZFo6IQ6XU +5MsI+yMRQ+hDKXJioaldXgjUkK642M4UwtBV8ob2xJNDd2ZhwLnoQdeXeGADbkpy +rqXRfboQnoZsG4q5WTP468SQvvG5 +-----END CERTIFICATE----- diff --git a/rootCA.key b/rootCA.key new file mode 100644 index 0000000..ea1c6e1 --- /dev/null +++ b/rootCA.key @@ -0,0 +1,27 @@ +-----BEGIN RSA PRIVATE KEY----- +MIIEpgIBAAKCAQEA43N8imKnmv+NC4MfpPlF8RMFRDPZ/PpeCbO4xXpzGemBsv5N +ODpdQ+ZiKtOmJPif64vM6a/diUMnC+VyL1VMG2tmeCtLSIv+LMvV4vZzEheVPwRW +eBH5/okS5g3lkZixau8R/vAmSql5H308nCaCgTBYQUMXVGKPoD9i4sSYmzz47Hmd +2Pt8jvFT/lzacEJrIelJAReH0oAwgnWiL2oApWBClzOpcuWHq3OWrIHZ5B9JJJyH +pKNcwnzPcfybdDFOJV+T3ktfQTXq58XR/lYNMAHocV1KzcL0whINaTNM235rKXpY +WLa0IcZQey2OsqnAq7QFJGKPUVq6gKbAiK2LmwIDAQABAoIBAQCwjFXhPM6IO1CJ +3Q/VCEBH7dGqrOzJtrUDpuMHNhLdzCiGfWoG+Ranu837HCncjLflJ7C4u2+kOeG3 +FDRblUPsDKOPJ1vaRf/XWaj98PpE0tVgAsfzj1CTSGbI94R4TSN5s3QuhM3UKlQA +Iz/GnQWzrYjzr1YOhCqj4k+pYZxq8aQ8MMdsTe20ufs+fLfX/f7I5P+aZLQpZWaB +l1Ojjbi0GgvUMTOACpRpT4gknpfLhdHQZeVw72UNFX/0/+GDFFxDp0+A+XUvSOQ2 +LiSkTNdjOJwvKfVhame+pHqaMVWQToJrMrMZmiMOMLdKmMDCEw5ea1kOcILt0PwC +PaS+FyHhAoGBAPabQmM01fwYzvfa9Db2aajViqrgRUzvKyLL9ESl0+iXsvY59KVa +TIs9LkhDEH7m/2mFJflv2sR1G0CUSGXzNYa+BXt5l1FU/DPKaJMc+vRA/97y8fhE +hk1XSXpYL+/hPreUeQWy2kn1cA66pTR+CcGzwpaazlgdc5SEQArSXtZjAoGBAOwd +b9shU5lO7uDUys9GU1CkLTd/8VkKt+GLE4VwsvgB/ORDF6n1exnmt6LLhXFcxd3h +bgLfvcPkDKfYMwOEUMzZR0ISEN8wFRoRLxgT9vGE7y2yxBJbBSaC08z0yqOIug51 +kSNm4P2uqwkuT1kicE5u7nH15jeiRlUvHBqiSf9pAoGBAOqeNiAKcZdhxu8aWgQ8 +lbOyTjZaHrSeSuzVG/V/y0dbpEEMTIxQh8hlEbZgT75caR1sNv/Egl8shxv+t45/ +QCqMeMzLlsIjV7qyVKG6Dav6dzUW8EzibOACLn7+jcTsCG5CDI32ZiW9I7pvqqNx +Ujj+nCAK8kv04TSoSgHBuca/AoGBAM2jnaXl0p91JYtfCPuZLjrPoinyHksEkL24 +mNnhG53wbUaIQHXfvMUEMe9w/dmLiTEDgwKxxt5zIaqVG2j2tkCTBALBJTyc7eP0 +D2YTDUGwG3dbeHTcHRI7YyfgExR2okSxlCSXF2EZ3RBz6tugqNtGthk+prDRfhv2 +ma2App3xAoGBAK3wHxbTeBE5pgIvFpisBL6zq+ZhDyTG3tkOEZ2KcqYEHw04rZ+z 
+yh4TC5VekpV3YKupFt3dUNJM5G3MgpEAaWsHbHsD0hMillcpuGAh1VHRup3J5Y+y +eg+CeKlXJK5cSREeamzKfroYnj6hPe/9HKvy1L9I2DbqESs7bzau4WyR +-----END RSA PRIVATE KEY----- diff --git a/rootCA.pem b/rootCA.pem new file mode 100644 index 0000000..cce1aef --- /dev/null +++ b/rootCA.pem @@ -0,0 +1,24 @@ +-----BEGIN CERTIFICATE----- +MIIEAzCCAuugAwIBAgIUFCudUXwBqKUNreGC28n/HyRCLZowDQYJKoZIhvcNAQEL +BQAwgZAxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVUZXhhczEQMA4GA1UEBwwHTWlk +bGFuZDETMBEGA1UECgwKSGVucnkgUHVtcDETMBEGA1UECwwKQXV0b21hdGlvbjEO +MAwGA1UEAwwFSFBJb1QxJTAjBgkqhkiG9w0BCQEWFm5vcmVwbHlAaGVucnktcHVt +cC5jb20wHhcNMTkxMTIwMTYwMDE3WhcNMjIwOTA5MTYwMDE3WjCBkDELMAkGA1UE +BhMCVVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdNaWRsYW5kMRMwEQYDVQQK +DApIZW5yeSBQdW1wMRMwEQYDVQQLDApBdXRvbWF0aW9uMQ4wDAYDVQQDDAVIUElv +VDElMCMGCSqGSIb3DQEJARYWbm9yZXBseUBoZW5yeS1wdW1wLmNvbTCCASIwDQYJ +KoZIhvcNAQEBBQADggEPADCCAQoCggEBAONzfIpip5r/jQuDH6T5RfETBUQz2fz6 +XgmzuMV6cxnpgbL+TTg6XUPmYirTpiT4n+uLzOmv3YlDJwvlci9VTBtrZngrS0iL +/izL1eL2cxIXlT8EVngR+f6JEuYN5ZGYsWrvEf7wJkqpeR99PJwmgoEwWEFDF1Ri +j6A/YuLEmJs8+Ox5ndj7fI7xU/5c2nBCayHpSQEXh9KAMIJ1oi9qAKVgQpczqXLl +h6tzlqyB2eQfSSSch6SjXMJ8z3H8m3QxTiVfk95LX0E16ufF0f5WDTAB6HFdSs3C +9MISDWkzTNt+ayl6WFi2tCHGUHstjrKpwKu0BSRij1FauoCmwIiti5sCAwEAAaNT +MFEwHQYDVR0OBBYEFPS+HjbxdMY+0FyHD8QGdKpYeXFOMB8GA1UdIwQYMBaAFPS+ +HjbxdMY+0FyHD8QGdKpYeXFOMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL +BQADggEBAK/rznXdYhm5cTJWfJn7oU1aaU3i0PDD9iL72kRqyaeKY0Be0iUDCXlB +zCnC3RVWD5RCnktU6RhxcvuOJhisOmr+nVDamk93771+D2Dc0ONCEMq6uRFjykYs +iV1V0DOYJ/G1pq9bXaKT9CGsLt0r9DKasy8+Bl/U5//MPYbunDGZO7MwwV9YZXns +BLGWsjlRRQEj2IPeIobygajhBn5KHLIfVp9iI5bg68Zpf0VScKFIzo7wej5bX5xV +hrlX48fFgM/M0Q2zGauVPAiY1aV4FctdmfstEjoaXAlkQQUsCDTdpTjIPrnLLvd1 +lqM/pJrHKTd2pLeRpFEtPWWTJt1Sff4= +-----END CERTIFICATE----- diff --git a/rootCA.srl b/rootCA.srl new file mode 100644 index 0000000..4b9fea9 --- /dev/null +++ b/rootCA.srl @@ -0,0 +1 @@ +58D1EF99EF1B2A05A529F4605B77619D1E8D9EC6 diff --git a/server.conf b/server.conf new file mode 100644 index 
0000000..062bf00 --- /dev/null +++ b/server.conf @@ -0,0 +1,14 @@ +[ req ] +default_bits = 4096 +default_md = sha512 +default_keyfile = deviceCert.key +prompt = no +encrypt_key = no +distinguished_name = req_distinguished_name +[ req_distinguished_name ] +countryName = "US" +localityName = "Texas" +organizationName = "Henry Pump" +organizationalUnitName = "Automation" +commonName = "f52c9bed0997c8f92b41bc085c20b0eaa47fbfa8f78bb86310087a24a8721401" +emailAddress = "noreply@henry-pump.com" \ No newline at end of file diff --git a/start.sh b/start.sh new file mode 100644 index 0000000..4e42f8f --- /dev/null +++ b/start.sh @@ -0,0 +1,39 @@ +# stop script on error +set -e + +#for M1 if no openssl then opkg update, opkg install openssl-util, opkg install coreutils-sha256sum, opkg install curl +if ! command -V curl > /dev/null 2>&1; then + printf "\nNo curl assuming no ssl tools, curl, or git\n" + opkg update + opkg install openssl-util + opkg install coreutils-sha256sum + opkg install curl + opkg install git + opkg upgrade libopenssl +fi +#for RPi +if ! command -V git > /dev/null 2>&1; then + apt-get update + apt-get install git +fi +# Check to see if root CA file exists, download if not +if [ ! -f ./root-CA.crt ]; then + printf "\nNO ROOT CERTIFICATE\n" + curl https://www.amazontrust.com/repository/AmazonRootCA1.pem > root-CA.crt +fi + +if [ ! -f ./rootCA.pem ]; then + printf "\nNO HPIoT ROOT CERTIFICATE\n" +fi + + +# install AWS Device SDK for Python if not already installed +if [ ! 
-d ./aws-iot-device-sdk-python ]; then + printf "\nInstalling AWS SDK...\n" + git clone https://github.com/aws/aws-iot-device-sdk-python.git + cd aws-iot-device-sdk-python + python setup.py install + cd ../ +fi + +python ./main.py -e a3641et952pm28-ats.iot.us-east-1.amazonaws.com -r root-CA.crt -p 8883 \ No newline at end of file diff --git a/start.sh.old b/start.sh.old new file mode 100644 index 0000000..5014137 --- /dev/null +++ b/start.sh.old @@ -0,0 +1,36 @@ +# stop script on error +set -e + +# Check to see if root CA file exists, download if not +if [ ! -f ./root-CA.crt ]; then + printf "\nNO ROOT CERTIFICATE\n" + curl https://www.amazontrust.com/repository/AmazonRootCA1.pem > root-CA.crt +fi + +if [ ! -f ./rootCA.pem ]; then + printf "\nNO HPIoT ROOT CERTIFICATE\n" +fi + +if [ ! -f ./deviceCert.pem ]; then + openssl genrsa -out deviceCert.key 2048 + openssl req -config server.conf -new -key deviceCert.key -out deviceCert.pem + openssl x509 -req -in deviceCert.pem -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out deviceCert.pem -days 365 -sha256 +fi + +# install AWS Device SDK for Python if not already installed +if [ ! -d ./aws-iot-device-sdk-python ]; then + printf "\nInstalling AWS SDK...\n" + git clone https://github.com/aws/aws-iot-device-sdk-python.git + pushd aws-iot-device-sdk-python + python setup.py install + popd +fi + +if [ !
-f ./deviceCertAndCACert.pem ]; then + cat deviceCert.pem rootCA.pem > deviceCertAndCACert.pem +fi + + +# run pub/sub sample app using certificates downloaded in package +printf "\nRunning pub/sub sample application...\n" +python aws-iot-device-sdk-python/samples/basicPubSub/basicPubSub.py -e a3641et952pm28-ats.iot.us-east-1.amazonaws.com -r root-CA.crt -c deviceCertAndCACert.pem -k deviceCert.key \ No newline at end of file diff --git a/utilities.py b/utilities.py new file mode 100644 index 0000000..0f1c223 --- /dev/null +++ b/utilities.py @@ -0,0 +1,29 @@ +def unmarshal_dynamodb_json(node): + data = dict({}) + data['M'] = node + return _unmarshal_value(data) + + +def _unmarshal_value(node): + if type(node) is not dict: + return node + + for key, value in node.items(): + key = key.lower() + if key == 'bool': + return value + if key == 'null': + return None + if key == 's': + return value + if key == 'n': + if '.' in str(value): + return float(value) + return int(value) + if key == 'm': + # recursively unmarshal each entry of a DynamoDB map + return {k: _unmarshal_value(v) for k, v in value.items()} + if key == 'l': + # recursively unmarshal each element of a DynamoDB list + return [_unmarshal_value(item) for item in value] + return node \ No newline at end of file diff --git a/utilities.pyc b/utilities.pyc new file mode 100644 index 0000000..0639fef Binary files /dev/null and b/utilities.pyc differ
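For reference, the DynamoDB-JSON helper in `utilities.py` above can be exercised as follows. This is a hedged sketch: the sample item and its values are invented for illustration, and the helper is restated inline in a simplified, self-contained form (with clean `M`/`L` handling) so the snippet runs on its own.

```python
# Self-contained restatement of utilities.py's unmarshal helpers (sketch).
def unmarshal_dynamodb_json(node):
    # Wrap the top-level item as a typed map, then unmarshal recursively.
    return _unmarshal_value({'M': node})


def _unmarshal_value(node):
    if not isinstance(node, dict):
        return node
    for key, value in node.items():
        key = key.lower()
        if key == 'bool':
            return value
        if key == 'null':
            return None
        if key == 's':
            return value
        if key == 'n':
            # DynamoDB numbers arrive as strings; pick int or float.
            return float(value) if '.' in str(value) else int(value)
        if key == 'm':
            return {k: _unmarshal_value(v) for k, v in value.items()}
        if key == 'l':
            return [_unmarshal_value(v) for v in value]
    return node


# Invented sample item in DynamoDB's wire format.
item = {
    'device': {'S': 'pump-01'},
    'online': {'BOOL': True},
    'readings': {'L': [{'N': '1.5'}, {'N': '2'}]},
    'meta': {'M': {'site': {'S': 'Midland'}}},
}
print(unmarshal_dynamodb_json(item))
# {'device': 'pump-01', 'online': True, 'readings': [1.5, 2], 'meta': {'site': 'Midland'}}
```

boto3's `dynamodb.types.TypeDeserializer` does the same job (plus the binary and set types omitted here) and is the usual choice when boto3 is already a dependency.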
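The PCCC constant tables near the top of this dump (`PCCC_DATA_TYPE`, `PCCC_DATA_SIZE`, `PCCC_ERROR_CODE` in `pycomm/cip/cip_const.py`) are plain lookup dictionaries keyed by data-file letter or status code. A minimal sketch of how a caller might consume them; the table subsets are copied from the file above, but `file_type_of` is a hypothetical helper, not part of pycomm:

```python
# Subsets of the lookup tables from pycomm/cip/cip_const.py.
PCCC_DATA_TYPE = {'N': '\x89', 'B': '\x85', 'F': '\x8a', 'ST': '\x8d'}
PCCC_DATA_SIZE = {'N': 2, 'B': 2, 'F': 4, 'ST': 84}
PCCC_ERROR_CODE = {
    -2: "Not Acknowledged (NAK)",
    16: "Illegal Command or Format, Address may not exist "
        "or not enough elements in data file",
    112: "Processor is in Program mode",
}


def file_type_of(address):
    """Return (type byte, element size in bytes) for a data-table
    address like 'N7:0'. Hypothetical helper for illustration only."""
    letters = ''.join(c for c in address if c.isalpha()).upper()
    return PCCC_DATA_TYPE[letters], PCCC_DATA_SIZE[letters]


print(file_type_of('N7:0'))      # ('\x89', 2)
print(file_type_of('F8:3'))      # ('\x8a', 4)
print(PCCC_ERROR_CODE.get(112))  # Processor is in Program mode
```

The type bytes are the DF1/PCCC file-type identifiers, so in the real library they end up embedded in request frames rather than printed; the error table maps the status byte of a reply to a human-readable message.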